Blog

Substance Painter: Tips and Tricks

3/30/2021

The Mission

Substance Painter is a PBR texturing software with a multitude of tools that let us work in many different ways. In this article we go over a handful of them to learn some more advanced texturing methods.

Resources
  • Substance Painter
Gradients with position map

Gradients in Substance Painter can be tricky to make if we don't use the right tool. For this example we are going to make a gradient from red to gray on this character's pants.

To start we need the position map baked. If it isn't, you can bake it directly in Substance Painter.

We begin by creating a fill layer on top of the base color layer.
Gradient layer color layer
We create a mask on the gradient layer and add a 3D Linear Gradient generator.
3D gradient layer
Inside the generator we have the 3D Position Start and 3D Position End options. For the gradient to work correctly, we have to pick each color from the model's position map.
Gradient position map
When we return to the Material display mode, we will see the result of the gradient.
Gradient color layer
Bake lighting

Baking lighting is a simple but very useful process that helps highlight parts of the model. If the final model is not going to be affected by real-time lighting, it can also add a more realistic touch.

We start by creating a fill layer with the properties we want the light to have, and then add a Light generator to it.
Bake lighting fill layer
Inside the generator we will find its properties, such as the direction, the height and the intensity.

Once everything is configured to our liking, we change the layer blending mode. The light usually works well with Overlay or Soft Light, although the best approach is to try a few and see which one matches the result we are looking for.
Light layer fusion
Light layer fusion
Anchor points

Anchor points let smart masks detect deformations (such as height or normal details) made in other layers that are not baked into the normal map.
​
In this example we have a layer with height information, and we want the mask we are going to create next to paint its edges automatically.
Anchor point intelligent mask
The first step is to create an anchor point on the deformation layer, and then create a layer with a smart mask. This layer always has to sit above the layer with the anchor point.
​
As we can see, if we enable the layer now it still does not work correctly, since it does not affect the edges of the deformation we created in the previous layer.
Anchor point intelligent mask
To solve this we must open the smart mask options and modify some attributes.
​
The first is in Micro Details: change the first two parameters to True. The second is at the bottom, in Micro Height: there we select Anchor Points and look for ours (it's recommended to name your anchor points properly). After doing this, everything should work correctly.
Micro details menu
Ambient occlusion menu
Anchor point intelligent mask
Working the roughness

The roughness map is one of the required maps in a PBR material, and also one of the most important when it comes to giving detail to a model, so it is important that the roughness in PBR models has real work and detail put into it.

In the first image we have a model with practically flat roughness; in the second image we have the same model with a much more detailed roughness.
roughness map weapon
roughness map weapon
To achieve these results we can use smart masks, occlusion and curvature masks, and of course textured brushes to add wear details.

Although it may seem very basic, these variations in how shiny a material is are what help give a model realism.
Layer blending mode

Blending mode
Although this may seem very obvious, since it is used in most layer-based programs, in Substance it is really powerful because we can select different blending modes for the different channels.

In the upper area of the layer stack you will find the channel selector, and on the right the layer blending options.
Normal blending mode
Example of Normal blending mode.
Soft light blending mode
Example of Soft light blending mode.
Nestor Llop
3D Artist
I am a 3D artist who is passionate about sculpting creatures and characters.
Instagram

Tips and Tricks to animate in Blender

3/23/2021

The Mission
In other articles we have covered some aspects of how to animate in Blender. In this article I want to add more tips for making better rigs and explain how they work in Blender.

This article also mentions a few shortcuts and where to find the tools in Blender if you have switched to the Industry Compatible keymap.
Resources
  • Blender 2.8 & later
Tips and Tricks to animate in Blender
Double Joints or Add a Bone between two Bones

When we try to animate arms and legs with two bones, the joints sometimes produce strange clipping. For a simple mesh in a mobile or low-poly game this step is not really necessary, but if you want to fix this small issue or you are trying to create a more complex rig, you should try adding double joints. This consists of adding a small new joint between the two joints where the clipping is happening.
Clipping bone
Model with clipping when the joints are rotating 
To do that, first select the bone you want to rotate and press Shift + P to clear the parent in Edit Mode, then move the joints to create a small separation between them.
Double Joints Blender
1. Press Shift + P to clear the parent
Double Joints Blender
2. Separate the joints
Once the joints are separated, select both of them. Remember that the order is important: if it is a leg, for example, first select the hip and then the shin. Right-click and select Fill Between Joints to create a bone in that space. Finally, rename the new bone in the right panel and make it the parent of the shin.
Double Joints Blender
Select Fill between joints
Bone Menu Blender
Make Sure the parent of the shin is the Knee
This small fix will also help you when creating the inverse kinematics. The example depicts an imaginary leg to make it easier to identify the different parts of the model, but you can repeat this on arms and other types of joints. A scripted version of these steps is sketched after the images below.
Clipping bone rotation
Model with better rotation on the joint
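If you prefer to script the double-joint setup instead of doing it by hand, here is a minimal sketch using Blender's Python API (bpy). The bone names ("Hip", "Shin", "Knee") and the 0.05 offset are placeholder assumptions for your own rig, not part of the original tutorial.

```python
# Hypothetical sketch of the double-joint setup with bpy.
# Assumes the armature is the active object and has "Hip" and "Shin" bones.
import bpy

arm = bpy.context.object                 # the armature object
bpy.ops.object.mode_set(mode='EDIT')
ebones = arm.data.edit_bones

# Clear the parent of the lower bone and open a small gap (same as Shift + P, then move).
shin = ebones["Shin"]
shin.parent = None
shin.head.z -= 0.05                      # nudge the joint to create the separation

# Create the in-between bone manually (the UI equivalent is "Fill Between Joints").
knee = ebones.new("Knee")
knee.head = ebones["Hip"].tail
knee.tail = shin.head

# Re-parent so the chain reads Hip -> Knee -> Shin.
knee.parent = ebones["Hip"]
shin.parent = knee

bpy.ops.object.mode_set(mode='OBJECT')
```

Running it from the Text Editor or Python Console with the armature active reproduces the manual steps described above.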
Inverse Kinematics

With inverse kinematics, the program calculates the position of the joints in the chain from a given target position and rotation relative to the start of the chain. How can we do this in Blender? To explain it we will use the "leg" model we used before.
​
First, select the foot joint in Edit Mode and press Ctrl + E to extrude a new bone and rename it FeetIK. Repeat the same at the knee to create a bone that will act as the pole target, the KneeIK. Then press Shift + P to clear the parent of both bones, and move the KneeIK away from the model, towards where you want the leg to bend.
Extrude bones
Extrude a new bone by pressing Ctrl + E
Extrude menu
Or press the tool on the left panel
Knee joint
KneeIK away from the joint
Then go to Pose Mode and select the shin bone, go to the Bone Constraint menu in the right panel and add an Inverse Kinematics constraint. Select the armature as the target, and then pick the bone you want to control the IK, in this case the FeetIK.

Choose the armature as the pole target (where the chain of bones will try to bend towards) and select the bone it should aim at, in this case the KneeIK.

Change the pole angle if the model rotates at a wrong angle, and set the chain length: if you have double joints select 3, if not select 2.

Select the foot bone in Pose Mode and, in the Bone Constraint properties, add a Copy Rotation constraint. Select the armature as the target and the FeetIK as the bone.
Bone Menu Blender
Properties for inverse kinematics
Bone Menu Blender
Properties of the Copy Rotation constraint
If the inverse kinematics bends the wrong way, or stays straight without bending at all, it means the joint needs to be closer to the bending point. Go to Edit Mode, move the joint, then check the inverse kinematics again. A scripted version of the constraint setup is sketched after the images below.
kinematics model
kinematics model
Final Inverse kinematics on the model
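For reference, the same constraint setup can be written in bpy. The bone names ("Shin", "Foot", "FeetIK", "KneeIK") are assumptions matching the example rig in this article; adjust them to your own armature.

```python
# Hypothetical sketch of the IK + Copy Rotation setup with bpy.
# Assumes the armature is the active object.
import bpy

arm = bpy.context.object
bpy.ops.object.mode_set(mode='POSE')

# Inverse Kinematics on the shin, driven by the FeetIK bone, aiming at the KneeIK pole.
ik = arm.pose.bones["Shin"].constraints.new('IK')
ik.target = arm
ik.subtarget = "FeetIK"
ik.pole_target = arm
ik.pole_subtarget = "KneeIK"
ik.pole_angle = 0.0        # adjust if the leg twists to the wrong side
ik.chain_count = 3         # 3 with a double joint, 2 without

# The foot copies the rotation of the FeetIK controller.
copy_rot = arm.pose.bones["Foot"].constraints.new('COPY_ROTATION')
copy_rot.target = arm
copy_rot.subtarget = "FeetIK"
```

This mirrors the two constraint panels shown in the screenshots above.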
Custom Bone Shapes

In the last segment we saw how to create extra bones for the inverse kinematics, but seeing a lot of bones with the same shape can be confusing. We can change their display shape to make them easier to identify.
To do that, first create a new collection and call it BONES. Inside the new collection, create a new shape; it can be any mesh you want (it doesn't have to be a closed mesh), or even a curve.
Bone Shapes menu
Once we have the meshes created, go to the bones you created before, select the one whose shape you want to change, and go to the display properties, Custom Shape. There, select the mesh you want to use instead.
Mesh custom shape
Example of a mesh on the bone collection
Mesh custom shape
Changing the shape
Sometimes the mesh doesn't look right when you change it, but you can edit the original mesh and it will change on the bones too. Remember to apply the changes from the Object menu any time you change something. You can only view the custom shapes in Pose Mode; a minimal scripted example follows the images below.
Bones original meshes
Original meshes
Bones changed meshes
Changed meshes
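Assigning a custom shape can also be done in one line of bpy. The object and bone names ("CTRL_FootIK", "FeetIK") are hypothetical placeholders for whatever you put in your BONES collection.

```python
# Minimal sketch: assign a mesh from the BONES collection as a bone's custom shape.
import bpy

arm = bpy.context.object                       # the armature object
shape = bpy.data.objects["CTRL_FootIK"]        # any mesh or curve object used as a controller shape

pbone = arm.pose.bones["FeetIK"]
pbone.custom_shape = shape                     # replaces the default octahedral display in Pose Mode
```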
Default Bone Display & Color Coding

To further edit your bones, you can change the shape you view them with. Just go to the Object Data Properties menu in the right panel (the one with the running man icon) and open Viewport Display. There you can check In Front to always see the bones in front of your model, and change their shape; by default they are octahedral.
Octahedral bones
Octahedral
Stick bones
Stick
B-Bones
B-bone
Envelope bones
Envelope
You can also change the colour of the shapes you made to substitute the bones. Select the bone and, in Object Data Properties > Bone Groups, create a new group and change the default colour. If you have symmetric bones, it's recommended to select a different colour for the left and right sides; a scripted sketch follows the images below.
Menu color bones
Menu to change the color, press plus to create new groups
Controller bones colored
Controllers with the new colors
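A hedged sketch of the same colour coding using Bone Groups through bpy (available in Blender 2.8–3.x). The group names, theme presets and bone names ("FeetIK.L", "FeetIK.R") are assumptions for illustration.

```python
# Colour-code controllers using Bone Groups (Blender 2.8-3.x).
import bpy

arm = bpy.context.object
pose = arm.pose

left = pose.bone_groups.new(name="Left")
left.color_set = 'THEME01'           # red preset
right = pose.bone_groups.new(name="Right")
right.color_set = 'THEME04'          # blue preset

pose.bones["FeetIK.L"].bone_group = left
pose.bones["FeetIK.R"].bone_group = right
```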
Pose Libraries

In Blender you can save different poses so you can access them immediately. To do that, put the bones in the pose you want to save and select them. Go to Object Data Properties > Pose Library and create a new pose library there. Important: check the shield icon to the right of the name, because if you don't, you will lose the progress you made in the pose library even if you save the scene.
  1. Shield: remember to check it so the library is saved.
  2. Adds a new pose; select all the bones to save their position.
  3. Deletes the selected pose.
  4. Applies the selected pose to the bones.
Pose libraries bones
I recommend saving the base rig as an initial pose, so that you don't lose the original rig. A scripted version of this workflow is sketched below.
Pose Library rig
Pose library for the leg
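A rough sketch of the legacy pose library workflow in bpy (Blender 2.8x operators; parameter names may differ slightly between versions, so treat this as an assumption rather than a definitive recipe). The pose name "BasePose" is a placeholder.

```python
# Save the current pose of all bones into a (legacy) pose library, Blender 2.8x.
import bpy

arm = bpy.context.object
bpy.ops.object.mode_set(mode='POSE')
bpy.ops.pose.select_all(action='SELECT')            # select all bones so the whole pose is stored

bpy.ops.poselib.new()                                # create a pose library on the active armature
arm.pose_library.use_fake_user = True                # the "shield": keep the library when saving the file
bpy.ops.poselib.pose_add(frame=1, name="BasePose")   # store the current pose
```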
Conclusion

This is a small collection of tricks that I compiled after having to animate some models that were needed in glTF. It was a bit difficult, since Blender is not friendly to people who learned Maya, 3ds Max or other industry software.

Although they are making progress in making this software easier to learn, for example by adding the industry-compatible controls, it still needs work.

Some controls change when you switch to the industry standard, which is why this guide is for those who learned software like Maya but want, or have, to use Blender: to learn the controls and where the basic tools are.
Laura Usón
3D ARTIST
Passionate about videogames, movies and creatures. Artist by day and superhero at night.

Developing Headquarters at Decentraland

3/17/2021

Premise
"Create, explore and trade in the first-ever virtual world owned by its users." With this welcoming sentence Decentraland invite us to dive into one of the first blockchain-backed platforms that enhances an alternate reality to express our ideas and projects into the world.
​
Launched in February 2020, Decentraland has seen its potential grow exponentially as different blockchain-related companies and projects have set up headquarters or contact points to build bridges with their audience across the metaverse.

The mission

Keeping this growing trend in mind, Nugget's News approached us to create a brand new, different social point where people could meet and engage with the company, focused mainly on learning about and tracking trends in the blockchain world.

So the mission was to design a space where people could learn about crypto and about the company, connect in different ways, and attend events hosted in the building.
Resources

  • Unity Editor (2018.3.6f1)
    unityhub://2018.3.6f1/a220877bc173
 
  • Decentraland SDK
    https://docs.decentraland.org/development-guide/SDK-101/
​
  • Unity Decentraland Plug-in
    https://github.com/fairwood/DecentralandUnityPlugin
Developing Headquarters at Decentraland Nugget's News
Finding the headquarter purpose

Ahh yes, here we are at the starting point of this journey. When it comes to developing buildings with a specific goal, we must first ask what the building is going to be used for, how its space will initially be divided, and how much space we have. Two words: What and Where.

What does this building gather?

Nugget's News had a clear vision of how its content had to be divided and placed:
  • 1st floor: General information about the company, news and social links.
  • 2nd floor: Learning about cryptocurrencies with supporting videos, images and out-links.
  • 3rd floor: An event area to host different kinds of event.
General information floor
Where is this building located?

Having settled our general info distribution by floors, we had at our disposal a 2x2 LAND estate in Decentraland, which meant:
  • 32 meters length in both X and Z axis. 46 meters height (Y axis).
  • 40k tris for all the models present in scene.
  • 46 materials and 23 textures to color design our scene.
* Among other minor specs:
​
Max Scene Limitations ECS
Scene Limitations Decentraland
Initial ideas, visual conceptualization

The project came in to modernize a previous building designed for the same purpose, but the client wanted a more unique approach rather than a realistic building.
Nugget's News old design
One thing I personally love about the metaverse is that the rules of reality are set by no one but you, the creator.

What does this mean?
​
In Decentraland, architecture is not bound by the laws of nature and the sky is the limit (46 m, pun intended) when it comes to unleashing imagination. You don't have to care about concrete covers, pipes or electrical equipment. There is no such thing as supporting pillars or weight constraints, and everything you make has a purely aesthetic reason.
​
Basically:
The special touch
Instead, we play with geometry constraints, shader limitations and runtime overhead. A different kind of game with similar functional results.

The resemblance to a real-life building is inevitable, but we can give its art style a special touch to make it unique.
Following the need to modernize its appeal, for this project we played with the idea of having a hangout area, a learning area and a screening area in a building with a sober but elegant design. We chose modernist architecture as the main reference and mixed it with the idea of a giant logo surrounding the building.
Nugget's News new design
Nugget's News new design
Blockout, helpful space design

After the references have been set, it's key to start developing a mock-up or quick model that draws the volume boundaries of the building and mixes properly what we want (#00) and how we want it (#01) based on the purpose of the building and the references chosen. 
Nugget's News new design
This gives us a proper base to iterate on, and we can start trying it out in-engine to match sizes. It also gives us a sense of how many objects we have to craft and lets us anticipate some possible constraints based on our goals.

From this blockout we extracted specific references for chairs, counters and every visual element needed. It was time to get into shapes.
Shape development

As every shape gets developed, individually or alongside the other props on the same floor, the project starts to approach a critical state where the composition has to be remade or tweaked to achieve a beautiful interior and exterior design.
Shape development

After different iterations and redesigns we get to the final design, all running in-engine.
Nugget's News final design
Once all shapes are made and the initial composition is set, it is time to go to the next step: designing materials and applying them.
Material design and Animation execution

As we said before, the Nugget's News building look comes from modernist architecture with an artistic touch of our own. This means the building mainly uses "modernist-like" materials such as concrete, clean wood, painted steel, brushed grey iron and a little bit of marble.

But we cannot forget this is a corporate building, so we have to mix those ideas with matching colours from the palette of the Nugget's News website. This is basically a palette of 5 blues and a coral variant used for contrast with the rest of the components.

Both main ideas mixed together result in the following material (excl. albedo dyes):
Material textures
Of course, creating materials is not a one-shot job. We have to iterate and local-deploy constantly to reach the result we want while playing within all the constraints. This means cleverly reusing materials throughout the building, minimizing those that can only be used in one place or are too specific, applying different materials to different polygon sets within the same mesh, and so on.
Material design evolution
An example of material design evolution is the wall (and floor) of this room and how it evolved until we nailed it.

Regarding animations, this is something I was thrilled to see done: the idea of the immense Nugget's News logo being animated and morphing during the whole experience. We found it pretty interesting that the shape of the logo has an almost continuous, tileable curvature, so we could play with it and invert the curvature as an animation, making the logo loop non-stop just by morphing.
Logo concept
A concept made to understand how the logo should deform; it exploits the fact that it already has a "natural" curvature and inverts it seamlessly.

Technically it is difficult to get right, as the rig has to be correctly built to drag the vertices properly during the animation with the simplest model complexity possible. But once we had it right and the animation complete, we could get our hands on the next step: colour and UV mapping.
Nugget's News final design
The UV mapping was something we worked out as the NN shape came along. Since the logo itself is made out of triangles, this benefits us and we can texture the whole thing with just two gradients: one for the blue-to-light triangles and another for the blue-to-dark triangles.
UV mapping
Determining info, labels and other communications

Once the composition and all the visual aspects of the scene are ready for evaluation and have been checked in-engine, it is time to get our hands on designing the information areas. At this point we are basically creating a good user experience: thinking carefully about how the design will be experienced and making sure everything is placed in a way that is not tedious for the player.
​
As an example, the second floor went through a re-composition from scratch to ensure the purpose of the area was fulfilled. The floor had to contain different smaller areas with specific content, each not directly related to the one next to it. So the idea we followed was to create an environment that naturally guides the player from the Counter (0) to area (4) after accessing the floor using the elevator.

​This information distribution can be seen in this project with the following screens:
UV mapping re-composition
Nugget's News welcoming area
A welcoming area, with out-links that take the user to other places related to the company, like Twitter or the website.
Nugget's News general info
Multiple panels that can be a calendar of events, a motif of the company or just general info about its activity.
Nugget's News collective shift
On the second floor there is a second welcoming counter, as the floor's content is related but has a different mission; if we want the player to feel welcomed and understand what's going on, this is the best way. There is also another link board, so this connection is always close to the user's experience.
Nugget's News panel info video
Different panels with different boards explaining specific materials and content about the company are placed along the second floor in independent cubicles.
Nugget's News panel info video
Each cubicle has its own "Learn more" link so the player goes directly to the expanded content related to what is being shown.
Nugget's News panel info video
A mid-floor between floors 2 and 3 was added to amplify the usable space and keep everything airy.
Nugget's News elevator
During the whole ride between levels, you can see where you are heading.
Nugget's News conference area
A general view of the conference area, spacious and set to gather big meetings with announcements or "Ted talks".
Nugget's News conference area
And a viewpoint on the same stage.
Conclusion

Once everything is in place and the correct information is set, there is nothing left to do but upload our content to the metaverse and let people experience it.
​
This project has given us a fantastic opportunity to create a headquarters with no relation to our daily work: we are only indirectly involved with cryptocurrencies, as we focus on 3D development and experiences. I really hope to get my hands again on something as different as building the worlds others imagine.
Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving everyday to be a better hooman.
twitter

Optimizing VRChat Worlds: Static Batching

3/10/2021

Premise

VRChat is a well-known social platform for all those who have VR equipment, and even if you don't have any specific hardware, you can join the party from your standard PC screen too! When you create a scenario, you have to be careful with texture sizing and data management to make sure not only that you can run your scene, but that your target audience can too! If you fail to democratize the hardware requirements, you fail to create a popular VRChat world.


The mission

This guide focuses on one of the best practices when developing videogame environments: static batching.

We will discuss the value of static GameObjects and the main differences between the static options. Lastly, we will look at the Avatar Garden (100 Avatars world) and how its scene is set up, so you have a reference point.
Resources
​
  • Unity Editor (2018.4.20f1)

Please note that although this is VRChat-focused, it is really a general Unity Engine topic.
When this article was written we had not yet got our hands on the VRChat SDK3 Udon system, so it is mainly written for VRChat SDK2 and general Unity knowledge.

Static Batching VRChat
What is the Static value

If a GameObject does not move at runtime, it is known as a static GameObject. If a GameObject moves at runtime, it is known as a dynamic GameObject.
Static GameObject menu
The Static checkbox (or dropdown) is how we tell the engine to pre-compute calculations for a GameObject that will remain still. This way Unity saves work at runtime: it deals with these objects only once, then moves on and spends its per-frame calculations only on dynamic objects.
​
Clicking the checkbox automatically checks/unchecks all the Static types we have at our disposal. If the GameObject has children attached, Unity will ask whether to apply these values to the whole hierarchy.
Applying only one type of static to an object is as easy as clicking the dropdown arrow next to Static and marking the values you want.
​
Static can make a huge difference in how performance behaves in Unity; it is a 101 of optimizing your work and getting it to run on potatoes. Static flags, well used, are the secret to smooth FPS.
Game view statistics
In the Game view we can find a little tab called "Stats", where we can see how many batches are processed per frame for the current view of the Camera in the scene. Without lightmap baking, occlusion culling, static batching and the rest, this number can easily rise, as every feature needs its own calculation per object.

This means, among other things: rendering the mesh, applying the material (1 batch per shader pass), lighting (1 batch per object if realtime), realtime effects like particles, the skybox... So you can see that marking your objects as static is crucial, especially the flag that tells the engine not to modify anything in their render/transform: Batching Static.

Before we get our hands on static batching, let's unravel the different types with a brief explanation of their usage:
​
  • Static value:
    • Contribute to GI: Consider the GameObject when computing Lighting Data.
    • Occluder Static: Turn this GameObject into a Static Occluder.
    • Occludee Static: Turn this GameObject into a Static Occludee.
    • Batching Static: Consider this GameObject to merge its mesh/meshes with other GameObjects set to Batching Static.
    • Navigation Static: Consider this object when computing Navigation Data.
    • Off Mesh Link Generation: Attempt to generate an Off-Mesh Link. (Navigation Data)
    • Reflection Probe: Consider this object when rendering Reflection Probes.

You can find information about creating a good Occlusion Culling system in VRChat here:
  • Optimizing VRChat Worlds: Occlusion Culling
The Static Batching in the Gauguin Avatar Garden

Static batching tells the engine to combine similar meshes in order to reduce the draw call count. But what counts as similar for Unity?

Unity will combine all meshes that have Batching Static checked and share the same material; in other words, Unity combines by material.

This notably reduces the number of times the engine has to call a mesh render without killing each object's independence. The meshes are combined, but Unity still considers all the GameObjects and their respective components (Transform, Physics...); they just won't move. So animated objects are off the table, as they are dynamic objects and should have their performance improved in other ways.

As an example, here in the Avatar Garden we have a set of assets repeated all over the environment that vary in their Transform values but share everything else (material settings). All these objects are marked as static, so Unity only renders:
​
  • Once all the "orange mesh" GameObjects.
  • Once all the "light-green mesh" GameObjects.
​
... and so on with every Material match between all the Static Batched GameObjects.
Static Batching Unity
The result of static batching is that Unity combines all the meshes that use the same water material, and likewise all the meshes that use the ground material.
Water mesh on runtime
Ground mesh on runtime
If you animate a GameObject through its Transform and it has Batching Static checked, it won't move at runtime. However, an animation based on shader values will still play. This means that modifying shader values at runtime is still possible for statically batched meshes.
Animation water mesh
The river in the Avatar Garden is an animation of the material's shader values. The engine is told to interpolate between two values over a period of time and does so through the shader; this allows us to make these kinds of effects while keeping the draw call count low.

Also notice that the shadows don't get displaced during the animation; this is due to the UV channels being used. The Standard shader uses UV0 for input texture maps like Albedo, Metallic and Smoothness, so the values modified at runtime only affect the UV0 channel of the model. Lightmaps, on the other hand, use UV1 and are not modified at runtime.
UV maps used in Unity
For more information on how we created the river in the Avatar Garden, check our article about it!
  • Creating water rivers in VRChat
Reducing draw calls

To draw a GameObject on the screen, the engine has to issue a draw call to the graphics API (such as OpenGL or Direct3D). Draw calls are often resource-intensive, with the graphics API doing significant work for every draw call, causing performance overhead on the CPU side.
To reduce this impact, Unity uses dynamic batching or static batching as we stated previously. Good practices to reduce draw calls include:
​
  • Combine different objects that are going to share the same Material.
  • Pack different and independent objects in one bigger Atlas Texture.
Texture packed
Asset using texture meshes
This is an example of atlas packing: all the objects in the exterior use the same texture maps, so they are drawn in a single call.

The Avatar Garden uses tileable textures, which also reduces the number of materials. For colour depth and variation we used baked lighting.
Conclusion

The static flags are a must for improving runtime performance in Unity. Well executed, they have a heavy impact on all machines and can let people with low-end hardware run high-end experiences.

You are more than welcome to drop any question or your try-out and results!
Join us: https://discord.gg/UDf9cPy
​

Additional sources:
https://docs.unity3d.com/Manual/GameObjects.html
https://docs.unity3d.com/Manual/StaticObjects.html
https://docs.unity3d.com/Manual/DrawCallBatching.html
Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving everyday to be a better hooman.
TWITTER

Optimizing VRChat Worlds: Collision Debugging

3/2/2021

Premise
VRChat is a well-known social platform for all those who have VR equipment, and even if you don't have any specific hardware, you can join the party from your standard PC screen too! When you create a scenario, you have to be careful with texture sizing and data management to make sure not only that you can run your scene, but that your target audience can too! If you fail to democratize the hardware requirements, you fail to create a popular VRChat world.

The mission

This guide focuses on Unity's Physics Debugger, how to set up your 3D colliders in the scene, and how to manage triggers in VRChat.

Having a simple collision setup will help calculations on low-end machines, as the physics interactions will be fewer and simpler.

Resources
  • Unity Editor (2018.4.20f1)
  • VRChat SDK2
Please note that although this is VRChat-focused, it is really a general Unity Engine topic.

When this article was written we had not yet got our hands on the VRChat SDK3 Udon system, so it is mainly written for VRChat SDK2 and general Unity knowledge.
Optimize VRChat worlds Collision Debugging tutorial guide
​What are Colliders

Collider components define the shape of a GameObject for the purposes of physical collisions. A collider, which is invisible, does not need to be the exact same shape as the object's mesh.

Every time you play a game and the player moves, or you have "physical" interactions with the environment such as dropping a glass or throwing a rock, collisions are at work, behaving as solids, triggers or gravity-affected objects.

This sorcery works with default parameters, but as in every game engine you can set up colliders to your liking to match the interaction results you want. For example, this could mean being able to walk on firm ground but sink progressively as you step into muddy dirt or enter a flowing river.

Colliders basically define interactions between objects, sending messages to the engine to determine whether an object is colliding, and therefore cannot pass through; or whether it is triggering, in which case it can enter the other collider's volume and (if set up to) send a specific message.
​
colliders essentials in video games
Here we could say that the player is walking across the map and decides to enter the river. This literal description translates into collider design: colliders that define the height the player walks on, and triggers that modify the player's speed, making them slower as they go further into the river. This does indeed happen in our Avatar Garden environment:
colliders essentials in video games
As the player walks through the river, they pass through the water mesh and walk on the river bed instead.
Types of Colliders

The three types of Collider interactions present in Unity are Static, Rigidbody and Kinematic.

Each has a specific use. For example, static colliders are for GameObjects that are not meant to move and are not affected by gravity. Rigidbody colliders are meant for objects that have forces applied to them, so gravity (and any other configured forces) affects them every frame (unless they are in sleeping mode). Last but not least, kinematic colliders are meant for kinematic bodies and are not driven by the physics engine; read more about kinematics here.
​
Colliders shapes in video games
​If we recreate this mockup in Unity and press Play, the engine will make the ball fall and the rock won't move.
​
To apply a 3D Collider component to an object, we have different options at our disposal:
​
  • Box Collider: The simplest and most used type of collider, a bounding box that creates a collision volume around the mesh. It is suitable for almost any type of collider interaction and is perfect for trigger volumes.
Unity Colliders: Box collider
  • Sphere Collider: Perfect for round or almost spherical objects that have to roll or you want to keep that curvature collision without using a Mesh Collider.​
Unity Colliders: Sphere collider
  • Capsule Collider: For cylindrical objects; it is like a sphere extruded from the middle and is good for characters and other objects that require roundness but need tall colliders.
Unity Colliders: Capsule collider
  • Wheel Collider: Suitable for torus-shaped objects like, as the name says, wheels. Its use is focused on racing games or vehicles that use wheels. It applies forces and makes it easy to configure a vehicle that drives over different types of soil or roads.
Unity Colliders: Wheel collider
  • ​​Terrain Collider: Acts as Collider based on the data collected from the Terrain GameObject.​
Unity Colliders: Terrain collider
  • Mesh Collider: The perfect option for non-primitive shapes that require complex collision. The best quick workaround to make them simpler and friendlier for the engine is to check the "Convex" toggle, which reduces the mesh collider to a maximum of 256 tris. This component also comes in handy for custom colliders we create ourselves in our modelling toolkit.
Unity Colliders: Mesh collider
By default the Mesh Collider will use the mesh assigned in the Mesh Filter as the mesh that represents the collider.

The Rigidbody component is separate from the Collider component, giving the object independence and control over itself and acting as an "add-on" to the static collider. It also states whether the collider's interaction is kinematic or not.
Unity Colliders: Rigidbody
Applying a Collider to a GameObject is as easy as adding the component in the Inspector window once you have your GameObject selected. It is located under Component/Physics/, but you can also search for it using the keyword Collider.
Unity Colliders components
What the Physics Debugger does

After we set up our colliders in the scene, the best way to visualize and correct them before testing is the Physics Debugger.

You will find this window located in Window/Analysis/Physics Debugger
Physics Debugger in Unity
This window overlays the colliders on top of your meshes, as if adding a layer of semi-transparent objects whose colour matches the collider type: red for static, yellow for trigger, green for rigidbody and blue for kinematic colliders.
Physics Debugger: Collision geometry
Here you can toggle the display of the collision geometry, and you can also mouse-select your GameObjects directly by their collider volume.
Physics Debugger in Unity
This window offers a couple of settings to help us configure and size the colliders as comfortably as possible.

You can change the colours to whatever suits you best, change the transparency, or randomize the colours to create variation between colliders.

The Physics Debugger is going to be your best friend for spotting flaws in your physics before playing, or after noticing errors while testing!
Triggers in VRChat

Anyone experienced in game development will know that in Unity, to react to a trigger, you need a C# script telling the engine what to do when one collider triggers another. The Is Trigger bool on the Collider component tells the physics engine to let other colliders pass through it. This is not possible in VRChat due to the custom script limitations, so it manages triggers through its event handler instead: just add the VRC_Trigger script and the SDK will add the event handler.
VRChat trigger in Unity
From this point on, programming in VRChat becomes visual and no real code is needed. Just be aware that some things move around and it all becomes more "artist friendly".
VRChat trigger in Unity
To add a behaviour as a result of a trigger, just click Add in the VRC_Trigger component and start configuring your interactions. There are so many that covering a general use of these triggers here is practically impossible; so yes, the sky is the limit. Just remember that these operations can hurt performance badly if they turn out to be expensive to execute.
Applying Colliders in the Gauguin Avatar Garden (100 Avatars)

The colliders in the Gauguin Avatar Garden by Polygonal Mind are a mix of box colliders and mesh colliders, because we wanted to keep things simple while keeping some collider volumes under our own control. But that alone is not enough to understand why it is set up this way.

When you get your hands on colliders, the first question you have to ask yourself is:
Why am I making this collider?
Followed by:
What is my collider going to do?
VRChat 100 Avatars world
These two questions are essential to keep your collision complexity as low as possible, as you will want the physics engine to run as smoothly as possible and avoid artifacts in the player collision.

Gameplay guides collisions. There is no reason to create a collider for every single thing in the scene; instead, think about how the player is going to play (or how you intend them to play).
Collision Complexity in 100 avatars VRChat world
Collision Complexity in 100 avatars VRChat world
The big box at the back keeps players from leaving the scene. Caging the player is a good way to let them climb whatever they want without worrying about them breaking the scene.

Once again, one of the best practices in game development, this time applied to colliders, is doing the work by hand. Don't let the engine do the math for you without knowing exactly what it's doing. Evaluating the most suitable collider for each occasion will give you tighter control over the debugging process.
Mesh Collider match shape vrchat
For example, these tree logs don't use a mesh collider that exactly matches their shape. Why? Because there is no reason to spend a complex collision here when the player only needs to notice that there is a log in their way, nothing else.
Mesh Collider match shape vrchat
Here is another example of collider design: you don't need to create a collider for everything. If we had decided to create a collider for each small rock, the player would feel little bumps when walking, which would be very uncomfortable, or at least wouldn't match the playable vision we had. Instead, the ground is a mesh collider using the same ground mesh, and the grass has no collision either.
Collider design vrchat
And as the last practical example shown here, I want to point out that the tops of our trees in the Avatar Garden are not covered by primitive colliders. Because players can reach the high treetops and no primitive collider worked well for the curvature of our model, we decided to create a custom model just to fulfil this mesh collider need.

Other things we decided to use mesh colliders for were bushes and medium-sized plants, because there was no way to use primitive-shaped colliders for such shapeless vegetation. We tried to keep the shape of all the mesh colliders as simple as possible, or enabled the "Convex" option to reduce them to 256 tris when they were heavier.
Simple Mesh Colliders in a rock

Conclusion

When it comes to game development, collisions and physics, or at least basic physics, are the second stage of environment development, so always keep them in mind when building your worlds! They can be a true game changer in how the experience is felt and enjoyed. Keep it simple, but also keep it clever!

You are more than welcome to drop any question or your try-out and results!
Join us: https://discord.gg/UDf9cPy

Additional sources:
https://docs.unity3d.com/2018.4/Documentation/Manual/CollidersOverview.html
https://yhscs.y115.org/program/lessons/unityCollisions.php
https://docs.unity3d.com/2018.4/Documentation/Manual/RigidbodiesOverview.html


Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving everyday to be a better hooman.
twitter
