First of all, we will open “The Sandbox Maker” and create a new Experience.
Once we have entered the new land we have created, we will see that there are quite a few options, but to keep this guide simple we will use just a few of them.
Catalysts are divided into four different tiers: Common, Rare, Epic and Legendary.
Each tier defines the number of sockets and the scarcity of your ASSETS.
These are the different behaviours available right now:
Basic Movement Controllers
We are going to create a mission in which you have to talk to two NPCs who will guide you to an elevator; it will not activate until you have talked to both of them.
We will start the quest by selecting the assets needed.
We can use the predefined avatars provided by the platform.
The platform will start working when we talk to the King, since we made it activate as soon as the toggle behaviour is resolved.
As simple as that.
Create your own adventure with monsters, heroes, magic, supernatural beings and more.
There are many ways to create missions that are totally different from this one; it's time to let your imagination run wild.
Deployment is the standard process our work goes through during and after development. There are different types of deployments, chosen depending on the purpose.
Knowing which deploy you need for each occasion is key to a good feedback flow.
This is the most common type of deployment for debugging iterations. It is done by turning your local machine into a local server that runs, in your browser, the Decentraland project you have compiled or exported from Unity.
#00 Installing the Needed Resources
#01 Running your Local Decentraland Folder
With the SDK installed, the machine now has the resources to understand and execute the project code.
You can pass any local host parameter, for example to turn night mode on or enable Web3 interactions (e.g. `&ENABLE_WEB3`). Just place the parameter at the end of the URL.
Vercel allows developers to run local deploys of a Decentraland scene remotely. This means it is possible to share just a link with a client so they can test and try out the project without deploying directly to Decentraland. The process is very similar to GitHub Pages and services like Amazon S3. The deploy is also limited by the same rules as a local deploy, so bear in mind that the platform is not enabled inside the Vercel deploy.
#00 Gathering the Needed Resources
#01 Deploying the Folder into the Service
A platform deploy publishes to the final land inside Decentraland. This will make your scene live for everyone to visit and enjoy what you have created. It is also a useful testing area if you want to try the real thing with real constraints live, and it is the best way to ensure that everything is working as expected. For this you will need access to a land through your MetaMask account (Operator or Owner rights).
First of all, run the start command (`dcl start`) and make sure that you are not missing any core functionality of your project.
If by any chance your project runs Web3 code, double-check that the server integration is properly done, and run the command that enables Web3 during the local deploy.
Once everything is in place and working properly, interrupt the local emulation (Ctrl+C) and run the deploy command (`dcl deploy`).
If you have been granted operator access, just follow the steps, sign the deploy upload through MetaMask, and that's all!
Any error or inconvenience during the deploy will be shown in the command prompt log. By default, the deploy follows the signing steps in the browser.
If you want to deploy to a specific catalyst, just add a few arguments to the usual deploy command to change the upload target (for example, `dcl deploy --target peer.decentraland.org`).
Commands and Executions
Console Project Commands (on Decentraland Project Root directory)
Note: To preview old scenes that were built for older versions of the SDK, you must set the corresponding version of decentraland-ecs in your project’s package.json file.
Always consider which deploy best suits the debugging purpose and use all the tools at your disposal to make your project reliable and stable.
Workspaces are essentially predefined window layouts. It is often useful to quickly switch between different workspaces within the same file.
Blender's default startup shows the "Layout" workspace in the main area.
It also has several other workspaces added by default:
In "Layout" workspace we can see diferent Editors
Basic movement controllers
However, since Blender is very hotkey-based, in case you want to learn the default settings, here is a small guide to the basic movement controllers:
The editors are very useful. They are divided into tabs (General, Animation, Scripting and Data); the most important editors that we will use frequently are:
These are tools similar to the Deform ones, however, they usually do not directly affect the geometry of the object, but some other data, such as vertex groups.
These are constructive/destructive tools that will affect the whole Topology of the mesh. They can change the general appearance of the object, or add new geometry to it…
Unlike Generate ones, these only change the shape of an object, without altering the topology.
These represent physics simulations. They are automatically added to the modifier stack whenever a Particle System or physics simulation is enabled. Their only role is to define the position in the modifier stack from which the base data for the simulation is taken. As such, they typically have no attributes and are controlled by settings exposed in separate sections of the Properties editor.
RC → Object Context Menu
S → Scale
E → Extrude
K → Cut
F → Fill (Edge/vertex/face)
J → Connect vertices with edges
Shift + S → Snap menu (origin settings)
Tab → We can swap between Object Mode and Edit Mode.
1, 2, 3 → Switch between vertex, edge or face editing.
Numpad 1, 3, 7 → Orthographic views (front, side, top); Numpad 0 → Enter the main camera view.
Ctrl + R → Cut and create an edge loop.
Ctrl + RC → Select several vertices, edges, or faces.
Shift + RC → Extend the selection from the active vertex, edge or face to the last chosen.
Shift → Manual selection of several objects.
Shift + C → Reset the 3D cursor to the centre.
Ctrl + E, V or F → Bridge edge loops (connect edges/vertices).
Shift + A → Add menu.
Shift + F → Enter camera mode.
Shift + H → Hide everything except the selected object; H → hide just the selection.
Alt + H → Show every element hidden previously.
Ctrl + B → Bevel
Ctrl + P → Set parent
Ctrl + J → Join selected objects in one.
Alt + C → Convert to mesh.
Ctrl + X → Dissolve vertices, edges or faces.
Ctrl + + → Grow the selection to all geometry in contact with it.
S + Z + 0 → Flatten all selected vertices to the same Z position; Z can be swapped for X or Y.
Shift/Alt + D → Create a copy / create an instance of the selection.
Ctrl + Alt + Q → Toggle quad view (change of view and perspective).
Ctrl + S → Save the project.
There are several add-ons that you can activate in Edit → Preferences → Add-ons; they could make you happy.
Installing the plugin
First things first, we need to install the plugin. I will briefly explain how to do it so there are no problems at all.
It's pretty simple: download the latest version of the CATS plugin using this link to GitHub.
Now it's time to open Blender.
Select the Edit button at the top left of the screen and click on "Preferences".
Requirements and setting up the bones
Right now, you should have your avatar rigged and ready. Remember that eye tracking is completely optional and you should have prepared your avatar for it beforehand.
You can get creative with the uses, but in my case I separated the pupils from the actual eyes of the character. It gives a really good overall result.
As I said, is your avatar completely rigged? No? Then check out this guide to make the process easier.
How to Rig your Avatar for the Metaverse using Maya LT 3D
If you have everything ready, let's create a new set of bones for the eyes.
Now that you have it set up, mirror the new bones so you have them for both eyes.
Remember to name the bones correctly: call them Eye_L and Eye_R, so there are no problems when we get to Blender and the CATS plugin.
Once this is done, you only need to skin the new bones to the pupils so they move along.
Now everything is correctly set up. Export the avatar as an FBX and open Blender!
Using the plugin
Now, in Blender, import the model we just exported. BUT do not use Blender's own import tool; use CATS' own import feature instead.
If your character looks like a complete mess when you import it, hit the Fix Model button and it should hopefully fix it.
You can use the Testing tab to try out for yourself how it is looking so far with the eye tracking.
Now you can simply export the character.
Setting it up in Unity
This last step is making sure everything works inside Unity! Setting up the avatar inside VRChat is the same as always, so you can use our guide:
Create and Upload a VRChat Avatar with Blend Shapes Visemes
The only thing we have to keep in mind is setting up the new eyes. In the Eye Look menu, we can see a big sub-menu below where we can select both eye bones.
Once the bones are selected, you can play with each of the 5 different options to make them match their descriptions. Remember that the value you set is the maximum value; that means the eyes won't move more than that.
A little bit of History
Don't worry! This is going to be short. Colour has been studied since the 15th century: how to obtain colours, involving physics, chemistry and even maths. But it was not until around 1920 that the Bauhaus School developed different theories about the transmission of colours and how we perceive them, especially the studies of Johannes Itten, a Swiss expressionist.
It's thanks to these studies that we could develop the modern colour theory.
Johannes Itten → Zweiklang 1964
Have you ever made an image in Photoshop, only to find that the colours look different when you print it? That's because we have different methods of producing colour, in which the primary colours change.
These colours are used by screens and anything that emits light. The wavelengths of the light create the different tones, and when more light is added, the tone is brighter.
In this palette the primary colours are red, green and blue (RGB). Here, white is the combination of all of them, and black is the absence of colour.
This system is used by anything that reflects light, like books and other printed materials. Unlike the additive system, here the pigment determines the colour the human eye sees, depending on the light it reflects.
The primary colours here are cyan, magenta and yellow (CMY); white is the absence of colour, while black is the combination of all colours.
It's important to remember that the pigments we have at the moment don't absorb light completely; because of that, when we mix all the colours, the closest we get to black is a very dark brown. To fix that we add a fourth pigment, which we call Key (hence CMYK); this fourth pigment is essentially black.
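The role of the Key channel can be sketched with the standard RGB-to-CMYK conversion (channels in [0, 1]): K measures how far the darkest channel is from white, and C, M and Y are derived from what remains.

```typescript
// Standard RGB -> CMYK conversion: K is the "darkness" shared by all
// channels, and C/M/Y cover whatever colour is left after removing K.
function rgbToCmyk(r: number, g: number, b: number): [number, number, number, number] {
  const k = 1 - Math.max(r, g, b)
  if (k === 1) return [0, 0, 0, 1] // pure black: only the Key pigment is used
  const c = (1 - r - k) / (1 - k)
  const m = (1 - g - k) / (1 - k)
  const y = (1 - b - k) / (1 - k)
  return [c, m, y, k]
}
```

For example, pure red (1, 0, 0) needs no Key at all and comes out as full magenta plus full yellow, while pure black is printed with the Key pigment alone.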
Primary, Secondary and a lot more
Now that we have explained the different systems, it's time to explain the different components that make up a colour. By mixing all of these together we can obtain any possible colour.
This marks the position on the colour wheel. In programs like Photoshop it is usually given in degrees (because it's a wheel); for example, perfect violet is at 270 degrees.
How bright a colour is. It usually goes from 0% to 100%, 0% being black and 100% being white.
This component tells us how rich a colour is. Less saturation moves the colour towards a shade of grey; full saturation gives the purest colour, the pinkest pink, if you will.
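The three components together can be turned back into RGB with the standard HSB (HSV) conversion; a sketch, with hue in degrees and saturation/brightness in [0, 1]:

```typescript
// Standard HSB (HSV) -> RGB conversion.
function hsbToRgb(h: number, s: number, v: number): [number, number, number] {
  const c = v * s // chroma: how "colourful" the result is
  const x = c * (1 - Math.abs(((h / 60) % 2) - 1))
  const m = v - c // lift every channel so the brightest matches v
  let rgb: [number, number, number]
  if (h < 60) rgb = [c, x, 0]
  else if (h < 120) rgb = [x, c, 0]
  else if (h < 180) rgb = [0, c, x]
  else if (h < 240) rgb = [0, x, c]
  else if (h < 300) rgb = [x, 0, c] // 270 degrees (violet) lands here
  else rgb = [c, 0, x]
  return [rgb[0] + m, rgb[1] + m, rgb[2] + m]
}
```

With full saturation and brightness, a hue of 0 degrees gives pure red, and the 270-degree "perfect violet" from above gives half red plus full blue.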
The colour as a chameleon
As you can see in the image, you may think that inside each square there is a square of a different colour, but is there?
An important rule when you are painting: a colour changes depending on its surroundings. This is a well-known optical illusion; the truth is that the colour of the small squares is the same.
If you think about it, a deep knowledge of colour theory is not necessary to create beautiful compositions, but it is recommendable, especially in cases like this optical illusion, because one colour can ruin a whole composition. Now I'm going to give a few examples of the usual ways to combine colours. All the palettes shown here were created with Adobe Color.
A single hue extended, changing the brightness and saturation.
Colours that are directly opposite on the colour wheel. This example is a simple one, but you can also create a palette with a double complementary, which consists of the combination of two complementary colour pairs.
Three colours that are equidistant on the colour wheel.
A group of colours that are adjacent to each other.
This is just a small introduction to colour theory for creating a palette. You can use the method you like the most or create one intuitively. In my case, the ones I like the most are those created with complementary colours combined with an analogous palette.
I think it's ideal for accentuating a part and creating a point of attention. If you want to investigate further how a colour can change depending on its surroundings, you can do the same as in the gif I made.
You can do it with Photoshop or by mixing pieces of coloured card, and it's a good exercise when you are starting out painting or designing.
If you already have advanced knowledge of Decentraland, you may want to skip this guide and go straight to the finished tutorial project on GitHub.
We'll assume that you have some basic knowledge of making a basic game in TypeScript using Decentraland components. If you need to know more about Decentraland components, check the following docs:
Entities and components | Decentraland
It would be nice if you checked the Decentraland docs about P2P messaging before starting, but it won't be necessary, as we'll cover it in this guide:
About multiplayer scenes | Decentraland
Prepare your component
If you already know everything about DCL components and input on entities you can skip this part.
For the sake of this guide we'll use a very simple component that changes the colour of a BoxShape entity, plus the spawn cube function provided by the initial example DCL project. Of course, you can adapt this guide to your own project and components.
To our new ColorComponent we'll add some functions that will be useful for managing the component via player input or P2P messages.
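As a sketch, the colour-cycling logic can look like the following. The names (`ColorComponent`, `nextColor`, `setColor`) and the colour count are illustrative assumptions, not the tutorial's exact code; in the real scene this would live in a DCL `@Component` whose index selects a material.

```typescript
// Illustrative sketch of the ColorComponent logic (assumed names).
// In the actual scene the index would select one of the cube's materials.
const COLOR_COUNT = 3 // e.g. red, green, blue

class ColorComponent {
  colorIndex = 0

  // Called from local player input: advance to the next colour, wrapping
  nextColor(): number {
    this.colorIndex = (this.colorIndex + 1) % COLOR_COUNT
    return this.colorIndex
  }

  // Called when a P2P message arrives: apply another player's index
  setColor(index: number): void {
    this.colorIndex = ((index % COLOR_COUNT) + COLOR_COUNT) % COLOR_COUNT
  }
}
```

Keeping the state as a single index (rather than a full colour value) also keeps the P2P messages small later on.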
To get everything ready to start coding our P2P messages, we'll first finish the update from local player input. If you don't know how to do that, I recommend checking the Decentraland docs for input on entities.
One last thing: let's make an array of 3 cubes to manage instead of only one.
Now we have 3 cubes that change colour, ready in a local-only scene.
To simulate multiple players in your local scene, open the Decentraland preview in two different browser windows.
First we need to define the structure of our messages with the data we want to transmit. In this tutorial the structure is simple, but in your own game be careful with this part: the more information you need to transmit, the slower the updates will be. Always try to design your game to reduce the amount of data transmitted.
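A possible shape for that message, as a sketch (the field names are assumptions, not the tutorial's exact ones): just which cube changed, the new colour index, and the sender's id.

```typescript
// Minimal message payload: keep it as small as possible (assumed field names).
interface CubeState {
  cubeId: number     // which cube in our array changed (0..2)
  colorIndex: number // the new colour, as an index into a shared colour list
  sender: string     // unique player id, so receivers can ignore their own echo
}

function makeCubeState(cubeId: number, colorIndex: number, sender: string): CubeState {
  return { cubeId, colorIndex, sender }
}
```

Two numbers and a string per change is far cheaper to broadcast than, say, full colour values for every cube on every update.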
Make a function to update the entities with a received CubeState.
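Sketched framework-free (the names and the plain-object cubes are assumptions; in the real scene each entry would be an entity whose material gets swapped):

```typescript
// Apply a received state to the local cubes (illustrative sketch).
// Each cube is modelled here as a plain object holding its colour index.
const cubes = [{ colorIndex: 0 }, { colorIndex: 0 }, { colorIndex: 0 }]

function applyCubeState(state: { cubeId: number; colorIndex: number }): void {
  const cube = cubes[state.cubeId]
  if (!cube) return // ignore messages about cubes this scene doesn't have
  cube.colorIndex = state.colorIndex
}
```

The bounds check matters: a malformed or stale message should never crash the scene.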
Before making any messages we need a unique player ID. You can make your own system, but since Decentraland player names are unique, we'll use them in this tutorial.
Messages that are sent by a player are also picked up by that same player. The .on method can’t distinguish between a message that was emitted by that same player from a message emitted from other players.
Now we can start with the P2P code: make a function to capture emitted messages.
And in the ColorComponent, when the local player changes the colour, send a message with the new colour index.
The game is now ready: open the preview in two browser windows and check that the cubes change colour in both.