Polygonal Mind

Add Dynamic Bones to your character in Unity

4/7/2021

Overview
Adding bones that react to your movement can make a drastic difference to your character. From hair to a tail or a skirt, making things move makes everything cooler and more interesting.
We added googly eyes to one of our VRChat avatars to make it more fun.
Because we can, and so can you.
Resources
  • Unity
  • Maya LT 2019 or your preferred 3D software
  • Dynamic Bones (Unity Asset)
Add dynamic bones to your character in Unity
Adding the bones

Of course, the eyes won't move by themselves; they need a bone to make them bounce. We'll assume you followed our rigging tutorial to easily rig your character with Mixamo and fix any nasty problems. If not, be sure to check it out here:

Fix and reset your Mixamo avatar rig
We start with a rigged character. Meet the toast.

We didn't make any changes aside from fixing the skin so it doesn't break every time we move a bone.

Have your character ready? Good, now we add the new bones.
rigged character in Maya LT
Using the Create Joints tool, add your bones wherever you want. Make the bone chains as long as you need so the motion looks as smooth as possible.
skeleton menu joints
eyes rigged character
Since the eyes don't need any chain at all, we created the eye bones starting directly from the head.

Be sure to skin the new bones properly. In our case, we had to detach the pupils from the eyes and skin them separately.
Want to give your character even more personality? Use blend shape visemes to add facial expressions while talking. You can easily follow our guide here:

Create and upload a VRChat Avatar with blend shapes visemes

Now export your character, making sure the skin weights are correct and the Skin checkbox is ticked.


Time to bounce

Next stop: Unity.

Be sure to have the Dynamic Bones asset installed in your project; it's what lets us move the new bones.
Dynamic Bones asset installed
Check if everything is correct and the skin weight is working properly.
​
Drag and drop the DynamicBones.cs script onto your character mesh, or add it as a new component in the Inspector tab. Time for some tweaks.

Lots of levers, buttons and numbers appear, which can be a little intimidating at first, but you can get really good results by adjusting just a few parameters.

It's quite easy to do, but you will need patience to get a really great result, as most of the work is testing and adjusting whatever looks wrong.

Lots of testing.

While there are a lot of things you can touch, we will stick to the basics.
Dynamic Bones inspector menu
By default, Dynamic Bones gives pretty good results for bones that interact with meshes and are affected by gravity, but our case is a little special, so we will have to adjust things.
Let's start

First of all, we need to assign which bones we want to be dynamic. For that, we will select the "Root Bone", that is, the bone before all of our dynamic bones.
In this particular case, since we want to make the eyes dynamic and both eye bones are attached to the head bone, the head is our root.

Be sure not to select the end of the bone chain. Rookie mistake.
Dynamic bones script
Testing

Test, test, test. Move your character. Rotate it. Make sure it does what you want. You can get a lot of different effects by just adjusting a couple of parameters.
Testing dynamic bones
This is definitely not what we want. While the eyes move correctly on the X and Y axes, we don't want them to move on the Z axis.
freezing axis
Luckily, you can freeze any axis you want, so you can avoid this kind of problem.

Now the eyes won't pop out of their sockets, which is, to say the least, nice.
The eyes are in place, but we still need to tweak how they move and behave when the character does. These are the options in the image below:
  • Damping: how fast the bone comes to a stop.
  • Elasticity: how strongly the bone is pulled back to its default position.
  • Stiffness: how much the chain of bones can move and bend.
  • Inert: how much of the character's velocity is ignored when computing the bones' motion.
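As a sketch, the same setup can also be done from a script instead of the Inspector. The field names (`m_Root`, `m_Damping`, and so on) are the public fields exposed by the Dynamic Bones asset, but the bone path and values below are only examples for our toast character; adjust them to your own rig:

```csharp
using UnityEngine;

// Illustrative sketch: configuring Dynamic Bones from code. Requires the
// Dynamic Bones asset (its DynamicBone class) in the project; the bone
// path and parameter values are example guesses, not a recommendation.
public class GooglyEyesSetup : MonoBehaviour
{
    void Start()
    {
        DynamicBone bone = gameObject.AddComponent<DynamicBone>();

        // Root: the bone right before the dynamic chain (the head, in our case).
        bone.m_Root = transform.Find("Hips/Spine/Head");

        bone.m_Damping    = 0.2f; // how fast the bone comes to a stop
        bone.m_Elasticity = 0.1f; // pull back toward the default position
        bone.m_Stiffness  = 0.2f; // how much the chain can move and bend
        bone.m_Inert      = 0f;   // how much character velocity is ignored

        // Freeze the Z axis so the eyes can't pop out of their sockets.
        bone.m_FreezeAxis = DynamicBone.FreezeAxis.Z;
    }
}
```

Everything here can equally be set by hand in the Inspector, which is what we do in the rest of this article.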
how the eyes move
Now that you know what each option does, it's time to test. Tweak some settings and try for yourself.
dynamic bones eyes elasticity
For the eyes, we adjusted the elasticity, the stiffness and the damping to get the behaviour we wanted.

This is what gave us the best results, but of course, every character has its own sweet spot, and you will have to find yours.

Once your dynamic bones are as good as you want them, that's it! There's nothing more to do. Now you can do whatever you want with them: use them in your game or scenes, or upload your avatar to VRChat.
If you don't know how, check out our guide on uploading your avatars to VRChat.
Conclusion

Dynamic bones are a simple yet super effective way to give life to your characters. With just a little bit of tweaking you can get really good results, making your characters more dynamic and lifelike.

Moving clothes, hair, tails and eyes are just the beginning; your imagination is the limit here. Be creative!
Pedro Solans
3D ANIMATOR
Junior 3D Animator, improving every day. Videogame and cat enthusiast.
Twitter

Developing Headquarters at Decentraland

3/17/2021

Premise
"Create, explore and trade in the first-ever virtual world owned by its users." With this welcoming sentence Decentraland invite us to dive into one of the first blockchain-backed platforms that enhances an alternate reality to express our ideas and projects into the world.
​
Launched in February 2020, Decentraland has seen its potential grow exponentially as different blockchain-related companies and projects have placed headquarters or contact points to build bridges with their audience across the metaverse.

The mission

Keeping this growing trend in mind, Nugget's News approached us to create a brand new, different social point where people could meet and engage with the company, which focuses mainly on teaching and tracking trends in the blockchain world.

So the mission was to design a space where people could learn about crypto and about the company, connect in different ways, and attend events hosted in the building.
Resources

  • Unity Editor (2018.3.6f1)
    unityhub://2018.3.6f1/a220877bc173
  • Decentraland SDK
    https://docs.decentraland.org/development-guide/SDK-101/
  • Unity Decentraland Plug-in
    https://github.com/fairwood/DecentralandUnityPlugin
Developing Headquarters at Decentraland Nugget's News
Finding the purpose of the headquarters

Ah yes, here we are at the starting point of this journey. When it comes to developing buildings with a specific goal, we must first ask what the building is going to be used for, how its space will initially be divided, and how much space we have. Two words: What and Where.

What does this building gather?

Nugget's News had a clear vision on how its content had to be divided and placed:
  • 1st floor: General information about the company, news and social links.
  • 2nd floor: Learning about cryptocurrencies with supporting videos, images and out-links.
  • 3rd floor: An event area to host different kinds of events.
General information floor
Where is this building at?
​

Having our general info distributed by floors, we had at our disposal a 2x2 LAND estate in Decentraland, which meant:
  • 32 meters of length on both the X and Z axes, and 46 meters of height (Y axis).
  • 40k tris for all the models present in the scene.
  • 46 materials and 23 textures to color our scene with.
* Among other minor specs:
Max Scene Limitations ECS
Scene Limitations Decentraland
Initial ideas, visual conceptualization

The project came in to modernize a previous building designed for the same purpose; the client wanted a more unique approach rather than a realistic building.
Nugget's News old design
One thing I personally love about the metaverse is that the rules of reality are set by no one but you, the creator.

What does this mean?
​
In Decentraland, architecture is not bound by the laws of nature, and the sky is the limit (46 m, pun intended) when it comes to unleashing imagination. You don't have to care about concrete covers, pipes or electrical equipment. There is no such thing as supporting pillars or weight constraints, and everything you make has a purely aesthetic reason.

Basically:
The special touch
Instead we play with geometric constraints, shader limitations and runtime overhead. A different kind of game with similar functional results.

The resemblance to a real-life building is inevitable, but we can give its art style a special touch to make it unique.
Following the need to modernize its appeal, for this project we played with the idea of having a hangout area, a learning place and a screening place in a building with a sober but elegant design. We chose modernist architecture as the main reference and mixed it with the idea of a giant logo surrounding the building.
Nugget's News new design
Nugget's News new design
Blockout, helpful space design

Once the references have been set, it's key to start developing a mock-up: a quick model that draws the volume boundaries of the building and properly mixes what we want (#00) with how we want it (#01), based on the purpose of the building and the references chosen.
Nugget's News new design
This gives us a proper shot to iterate on, and we can start trying it out in-engine to match sizes. It also gives us a scope of how many objects we have to craft, and lets us anticipate possible constraints based on our goals.

From this blockout we extracted specific references for chairs, counters and every visual element needed. It was time to get into shapes.
Shape development

As every shape gets developed, individually or alongside the other props on the same floor, the project approaches a critical state where the composition has to be remade or tweaked to achieve a beautiful interior and exterior design.
Shape development
After different iterations and redesigns we reach the final design, all running in-engine.
Nugget's News final design
Once we have all the shapes made and the initial composition set, it is time for the next step: designing materials and applying them.
Material design and Animation execution

As we said before, the Nugget's News building look comes from modernist architecture with an artistic touch of our own. This means the building mainly uses "modernist-like" materials such as concrete, clean wood, painted steel, brushed grey iron and a little bit of marble.

But we cannot forget this is a corporate building, so we have to mix those ideas with matching colours from the palette of the Nugget's News website: basically five blues and a coral variant made to contrast with the rest of the components.

Both ideas mixed together result in the following material (excluding Albedo tints):
Material textures
Of course, creating materials is not a one-shot job. We have to iterate and local-deploy constantly to reach the result we want, while at the same time playing within all the constraints. This means cleverly reusing materials along the building, minimizing those that can only be in one place or are too specific, applying different materials to different polygon sets in the same mesh, and so on.
Material design evolution
An example of material design evolution is the wall (and floor) of this room, and how it evolved until it was nailed.

Regarding animations, this is something I was thrilled to see done: the idea of the immense Nugget's News logo being animated and morphing during the whole experience. We found it pretty interesting that the shape of the logo has an almost continuous, tileable curvature, so we could play with it and invert the curvature as an animation, making the logo loop non-stop just by morphing.
Logo concept
A concept made to understand how the logo should deform; it exploits the fact that the logo already has a "natural" curvature and inverts it seamlessly.

Technically it is difficult to get right, as the rig has to be correctly executed so the animation drags the vertices properly with the simplest model complexity possible. But once we had it right and the animation complete, we could get our hands on the next step: color and UV mapping.
Nugget's News final design
The UV mapping came together as the NN shape did: since the logo itself is made out of triangles, we could make the whole texture with just two gradients, one for the blue-to-light triangles and another for the blue-to-dark triangles.
UV mapping
Determining info, labels and other communications

Once the composition and all the visual aspects of the scene have been checked in-engine and are ready for evaluation, it is time to get our hands on designing the information areas. Here we are essentially crafting a good user experience: thinking deeply about how the design will be experienced and placing everything in a way that is not tedious for the player.

As an example, the second floor underwent a re-composition from scratch to ensure the purpose of the area was fulfilled. The area had to contain several smaller areas, each with specific content not directly related to the one next to it. So the idea we followed was to create an environment that naturally guides the player from the Counter (0) to (4) after accessing the floor via the Elevator.

This information distribution can be seen in the following screens:
UV mapping re-composition
Nugget's News welcoming area
A welcoming area, with out-links that take the user to other places related to the company, like Twitter or the Web.
Nugget's News general info
Multiple panels that can be a calendar of events, a motif of the company or just general info about its activity.
Nugget's News collective shift
On the second floor there is a second welcoming counter, as the floor's content is related but has a different mission; if we want the player to feel welcomed and understand what's going on, this is the best way. There is also another link board, so this connection is always close to the user's experience.
Nugget's News panel info video
Different panels with different boards to explain specific materials and contents about the company are placed along the second floor in independent cubicles. 
Nugget's News panel info video
Each cubicle has its own "Learn more" link so the player can go directly to the expanded content related to what is being shown.
Nugget's News panel info video
A mid-floor between floors 2 and 3 was added to amplify the usable space and keep everything airy.
Nugget's News elevator
During the whole ride between levels, you can see where you are heading.
Nugget's News conference area
A general view of the conference area, spacious and set to gather big meetings with announcements or "Ted talks".
Nugget's News conference area
And a viewpoint on the same stage.
Conclusion

Once everything is in place with the correct information set, there is nothing left to do but upload our content to the metaverse and let people experience it.

This project has given us a fantastic opportunity to create headquarters unrelated to our daily work: we are only indirectly involved with cryptocurrencies, as we focus on 3D development and experiences. I really hope to get my hands again on something as different as building the worlds others imagine.
Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving everyday to be a better hooman.
twitter

Optimizing VRChat Worlds: Static Batching

3/10/2021

Premise

VRChat is a well-known social platform for all those who have VR equipment, and even if you don't have any specific hardware, you can join the party from your standard PC screen too! When you create a scenario, you have to be cautious with texture sizing and data management to make sure that not only can you run your scene, but your target audience can too! If you fail to democratize hardware requirements, you fail to create a popular VRChat world.


The mission

This guide focuses on one of the best practices when developing videogame environments: static batching.

We will discuss the value of static GameObjects and the main differences between the static options. Finally, we will look at how the Avatar Garden (100 Avatars world) scene is set up, so you have a reference point.
Resources

  • Unity Editor (2018.4.20f1)

Please note that this is VRChat-focused, but it is really a Unity Engine topic.
When this article was written we did not yet have our hands on the VRChat SDK3 Udon system, so this article is mainly written for VRChat SDK2 and general Unity knowledge.

Static Batching VRChat
What is the Static value

If a GameObject does not move at runtime, it is known as a static GameObject. If a GameObject moves at runtime, it is known as a dynamic GameObject.
Static GameObject menu
The Static checkbox (or dropdown) is the way we tell the engine to pre-compute calculations for the GameObject and let it remain still. This way Unity saves work at runtime: it processes these objects once, then moves on and only spends per-frame calculations on dynamic objects.
​
By clicking the checkbox we automatically check/uncheck all the Static types at our disposal. If the object has children, Unity will ask whether to apply the values to the whole hierarchy.
To apply only one type of static to an object, simply click the dropdown arrow next to Static and mark the values you want.

Static can make a huge difference in how your Unity scene performs; it is a 101 of optimization and the key to getting your work running on potatoes. Static, well used, is the secret to smooth fps.
Game view statistics
In the Game view we can find a little tab called "Stats", where we can see how many batches are processed per frame for the current view of the Camera in the scene. Without lightmap baking, occlusion culling, static batching and the rest, this number can easily rise, as every feature needs a separate calculation per object.

This means (among others): calculating mesh visibility, applying the material (1 batch per shader pass), lighting (if realtime, 1 batch per object), realtime effects like particles, the skybox... So you can see that making your objects Static is crucial, especially the flag that tells the engine not to modify anything in the object's render/transform: Batching Static.

Before we get our hands on Batching Static, here is a brief explanation of each type and its usage:

  • Contribute GI: consider the GameObject when computing lighting data.
  • Occluder Static: turns this GameObject into a static occluder.
  • Occludee Static: turns this GameObject into a static occludee.
  • Batching Static: lets this GameObject's mesh be merged with other GameObjects set to Batching Static.
  • Navigation Static: consider this GameObject when computing navigation data.
  • Off Mesh Link Generation: attempt to generate an Off-Mesh Link (navigation data).
  • Reflection Probe Static: consider this GameObject when rendering Reflection Probes.
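If you prefer to set these flags in bulk rather than clicking through the Inspector dropdown, a small editor script can do it with Unity's `GameObjectUtility` and `StaticEditorFlags` APIs. This is only a sketch; the menu path is our own invention:

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch: mark every object in the current selection (children
// included) as Batching Static, keeping whatever other flags it already has.
public static class StaticFlagTools
{
    [MenuItem("Tools/Mark Selection As Batching Static")]
    static void MarkBatchingStatic()
    {
        foreach (GameObject root in Selection.gameObjects)
        {
            foreach (Transform t in root.GetComponentsInChildren<Transform>(true))
            {
                StaticEditorFlags flags = GameObjectUtility.GetStaticEditorFlags(t.gameObject);
                GameObjectUtility.SetStaticEditorFlags(t.gameObject, flags | StaticEditorFlags.BatchingStatic);
            }
        }
    }
}
```

For a handful of objects the Inspector dropdown is just as fast; a script like this only pays off on big hierarchies.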

You can find information about creating a good Occlusion Culling system in VRChat here:
  • Optimizing VRChat Worlds: Occlusion Culling
The Static Batching in the Gauguin Avatar Garden

Batching Static tells the engine to combine similar meshes in order to reduce the draw call count. But what does "similar" mean for Unity?

Unity will combine all meshes that have Batching Static checked and share the same material; in other words, Unity combines by material.

This notably reduces the number of render calls without killing each object's independence: the meshes are combined, but all GameObjects and their respective components (Transform, Physics...) are still considered; they just won't move. So animated objects are off the table, as they are dynamic objects whose performance should be improved in other ways.

As an example, in the Avatar Garden a set of assets is repeated across the environment, varying in Transform values but identical in everything else (material settings). All these objects are marked as Static, so Unity only renders:

  • All the "orange mesh" GameObjects once.
  • All the "light-green mesh" GameObjects once.

... and so on for every material match between all the Batching Static GameObjects.
Static Batching Unity
The result of static batching: Unity combined all the meshes that use the same water material, and likewise all the meshes that use the ground material.
Water mesh on runtime
Ground mesh on runtime
If you animate a GameObject through its Transform and it has Batching Static checked, it won't move at runtime. An animation based on shader values, however, will still play. This means shader modification at runtime is still possible for statically batched meshes.
Animation water mesh
The river in the Avatar Garden is an animation of the material's shader values. The engine is told to interpolate two values over a period of time through the shader; this lets us create this kind of effect while keeping the draw call rate low.

Also notice that the shadows are not displaced during the animation. This is due to the UV set being used: the Standard shader uses UV0 for input texture maps like Albedo, Metallic and Smoothness, so values modified at runtime only affect the model's UV0 channel. Lightmaps, on the other hand, use UV1 and are not modified at runtime.
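In the Avatar Garden the river is driven by an Animation clip keyed on the material, but the same idea can be sketched in a script: scroll the Standard shader's `_MainTex` offset every frame while the mesh itself stays batched. (Note that accessing `renderer.material` creates a per-renderer material instance.)

```csharp
using UnityEngine;

// Sketch: runtime shader-value animation on a Batching Static mesh.
// The transform is frozen by batching, but material values still update.
public class RiverScroll : MonoBehaviour
{
    public float speed = 0.1f;
    Renderer rend;

    void Start()
    {
        rend = GetComponent<Renderer>();
    }

    void Update()
    {
        // Scrolls the albedo map along V. Only UV0-driven maps move;
        // the lightmap (UV1) stays put, so baked shadows don't drift.
        rend.material.SetTextureOffset("_MainTex", new Vector2(0f, Time.time * speed));
    }
}
```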
UV maps used in Unity
For more information on how we created the river in the Avatar Garden, check our article about it!
  • Creating water rivers in VRChat
Reducing draw calls

To draw a GameObject on the screen, the engine has to issue a draw call to the graphics API (such as OpenGL or Direct3D). Draw calls are often resource-intensive, with the graphics API doing significant work for every call, causing performance overhead on the CPU side.
To reduce this impact, Unity uses dynamic batching or static batching, as stated previously. Good practices for reducing draw calls include:

  • Combining different objects that are going to share the same material.
  • Packing different, independent objects into one bigger atlas texture.
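Besides ticking the checkbox in the editor, Unity can also batch a prepared group of objects at runtime through `StaticBatchingUtility`. A minimal sketch:

```csharp
using UnityEngine;

// Sketch: combine every mesh under this root at runtime. Children sharing
// a material end up in the same batch; afterwards they can no longer move.
public class BatchEnvironment : MonoBehaviour
{
    void Start()
    {
        StaticBatchingUtility.Combine(gameObject);
    }
}
```

This is handy for environments instantiated at runtime; for a scene authored in the editor, the Batching Static flag does the same job at build time.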
Texture packed
Asset using texture meshes
This is an example of atlas packing: all the exterior objects use the same texture maps, so they are draw-called once.

The Avatar Garden uses tileable textures, which also reduces the number of materials. For color depth and variation we used baked lighting.
Conclusion

The Static value is a must for improving runtime performance in Unity. Well executed, it makes a heavy impact on all machines and can let people with low-end hardware run high-end experiences.

You are more than welcome to drop any question or your try-out and results!
Join us: https://discord.gg/UDf9cPy
​

Additional sources:
https://docs.unity3d.com/Manual/GameObjects.html
https://docs.unity3d.com/Manual/StaticObjects.html
https://docs.unity3d.com/Manual/DrawCallBatching.html
Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving everyday to be a better hooman.
TWITTER

Optimizing VRChat Worlds: Collision Debugging

3/2/2021

Premise
VRChat is a well-known social platform for all those who have VR equipment, and even if you don't have any specific hardware, you can join the party from your standard PC screen too! When you create a scenario, you have to be cautious with texture sizing and data management to make sure that not only can you run your scene, but your target audience can too! If you fail to democratize hardware requirements, you fail to create a popular VRChat world.

The mission

This guide focuses on Unity's Physics Debugger, how to set up your 3D colliders in a scene, and how to manage triggers in VRChat.

Having a simple collision setup will help calculations on low-end machines, as the physics interactions will be fewer and simpler.

Resources

  • Unity Editor (2018.4.20f1)
  • VRChat SDK2
Please note that this is VRChat-focused, but it is really a Unity Engine topic.

When this article was written we did not yet have our hands on the VRChat SDK3 Udon system, so this article is mainly written for VRChat SDK2 and general Unity knowledge.
Optimize VRChat worlds Collision Debugging tutorial guide
What are Colliders

Collider components define the shape of a GameObject for the purposes of physical collisions. A collider, which is invisible, does not need to be the exact same shape as the same Object’s mesh.

Every time you play a game, whether the player moves or you have "physical" interactions with the environment such as dropping a glass or throwing a rock, collisions are at work, behaving as solids, triggers or gravity-affected objects.

This sorcery works under default parameters, but as in every game engine you can set up colliders to your liking to match your desired interaction results. That could mean walking on firm ground but sinking progressively as you step into muddy dirt or enter a flowing river.

Colliders are basically interactions between objects that send messages to the engine to determine whether an object is colliding, and therefore can't go through the other; or whether it is triggering, in which case it can enter the other collider's volume and (if set up to do so) send a specific message.
​
colliders essentials in video games
Here we could say the player walks along the map and decides to enter the river. This literal description translates into collider design: colliders that define the height the player walks at, and triggers that modify the player's speed, slowing them down as they go further into the river. This is exactly what happens in our Avatar Garden environment:
colliders essentials in video games
As the player walks through the river, they pass through the water mesh and walk on the river bed instead.
Types of Colliders

The three types of collider interaction in Unity are Static, Rigidbody and Kinematic.

Each has a specific use. Static colliders are for GameObjects that are not meant to move and are not affected by gravity. Rigidbody colliders are for objects that have forces applied to them, so gravity (and any other configured forces) affects them every frame (unless they are sleeping). Last but not least, Kinematic colliders are for kinematic bodies that are not driven by the physics engine; read more about kinematics here.
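The three set-ups can be sketched in code like this (the object names are hypothetical):

```csharp
using UnityEngine;

// Sketch of the three collider interaction types described above.
public class ColliderTypesDemo : MonoBehaviour
{
    void Start()
    {
        // Static collider: a Collider with no Rigidbody. It never moves
        // and gravity does not affect it.
        GameObject rock = GameObject.Find("Rock");
        rock.AddComponent<MeshCollider>();

        // Rigidbody collider: forces and gravity act on it every frame
        // unless it is sleeping.
        GameObject ball = GameObject.Find("Ball");
        ball.AddComponent<SphereCollider>();
        ball.AddComponent<Rigidbody>(); // gravity enabled by default

        // Kinematic collider: has a Rigidbody but is driven by code or
        // animation, not by the physics engine.
        Rigidbody rb = gameObject.AddComponent<Rigidbody>();
        rb.isKinematic = true;
    }
}
```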
​
Colliders shapes in video games
If we recreate this mockup in Unity and press Play, the engine will make the ball fall while the rock won't move.
​
To apply a 3D Collider component to an object, we have different options at our disposal:
​
  • Box Collider: the simplest and most used type of collider; a bounding box that creates a collision volume around the mesh. It is suitable for almost any type of collider interaction and is perfect for trigger volumes.
Unity Colliders: Box collider
  • Sphere Collider: perfect for round or almost-spherical objects that have to roll, or when you want curved collision without using a Mesh Collider.
Unity Colliders: Sphere collider
  • Capsule Collider: for cylindrical objects; like a sphere extruded from the middle, it is good for characters and other objects that need round but tall colliders.
Unity Colliders: Capsule collider
  • Wheel Collider: suitable for torus-shaped objects like, as the name says, wheels. It is aimed at racing games and wheeled vehicles: it applies forces to the wheel and makes it easy to configure a vehicle that drives over different types of soil or road.
Unity Colliders: Wheel collider
  • Terrain Collider: acts as a collider based on the data collected from the Terrain GameObject.
Unity Colliders: Terrain collider
  • Mesh Collider: the perfect option for non-primitive shapes that require complex collision. The best quick way to make them simpler and friendlier to the engine is to check the "Convex" toggle, which reduces the mesh collider to a maximum of 256 tris. This component also comes in handy for custom collider meshes created in our modelling toolkit.
Unity Colliders: Mesh collider
By default the Mesh Collider will use the mesh assigned to the Mesh Filter as the mesh representing the collider.

The Rigidbody component sits apart from the Collider component, giving the object independence and control over itself, and acting as an "add-on" to the static collider. It also determines whether the collider's interaction is kinematic or not.
Unity Colliders: Rigidbody
Applying a collider to a GameObject is as easy as adding the component in the Inspector window once the GameObject is selected. It is located under Component/Physics, but you can also search for it with the keyword "Collider".
Unity Colliders components
What does the Physics Debugger do

After we set up our colliders in the scene, the best way to previsualize and correct them before testing is the Physics Debugger.

You will find this window under Window/Analysis/Physics Debugger.
Physics Debugger in Unity
This window overdraws the colliders on top of your meshes, as if adding a layer of semi-transparent objects whose color matches the collider type: red for static, yellow for trigger, green for rigidbody and blue for kinematic colliders.
Physics Debugger: Collision geometry
Here you can toggle the display of the collision geometry, and you can also mouse-select your GameObjects directly by their collider volume.
Physics Debugger in Unity
This window offers a handful of settings to help you configure and size the colliders as comfortably as possible.

You can change the colours to whatever works best for you, change the transparency, or apply random variation between them.

The Physics Debugger is going to be your best friend for spotting flaws in your physics before playing, or after noticing errors while testing!
Triggers in VRChat

Anyone experienced enough in game development will know that in Unity, to activate a trigger, you need a C# script telling the engine what to do when one collider triggers another. The "Is Trigger" bool in the Collider component tells the physics engine to let other colliders pass through it. Custom scripts are not possible in VRChat due to its scripting limitations, so it manages triggers through its Event Handler instead: just add the VRC_Trigger script and the SDK will add the Event Handler.
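For reference, this is roughly what that plain-Unity trigger script looks like. It won't run inside a VRChat world, where custom scripts are blocked; it only shows the kind of thing VRC_Trigger replaces:

```csharp
using UnityEngine;

// Plain-Unity sketch of a trigger volume. In VRChat the VRC_Trigger
// component and its Event Handler take the place of a script like this.
[RequireComponent(typeof(BoxCollider))]
public class RiverTriggerDemo : MonoBehaviour
{
    void Reset()
    {
        // Mark the collider as a trigger so other colliders pass through it.
        GetComponent<BoxCollider>().isTrigger = true;
    }

    void OnTriggerEnter(Collider other)
    {
        Debug.Log(other.name + " entered the trigger volume");
    }

    void OnTriggerExit(Collider other)
    {
        Debug.Log(other.name + " left the trigger volume");
    }
}
```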
VRChat trigger in Unity
From this point on, programming in VRChat becomes visual and no real code is needed. Just be aware that some things change place, and it all becomes more "artist friendly".
VRChat trigger in Unity
To add a behaviour as the result of a trigger, just click Add in the VRC_Trigger component and start configuring your interactions. There are so many that covering every use of these triggers is nearly impossible. So yes, the sky is the limit. Just remember that these operations can impact performance badly if they turn out to be expensive to execute.
Applying Colliders in the Gauguin Avatar Garden (100 Avatars)

The colliders in the Gauguin Avatar Garden by Polygonal Mind are a mix of Box Colliders and Mesh Colliders: we wanted to keep things simple while keeping full control over certain collider volumes. But on its own that doesn't explain the reasoning, so let's break it down.

When you get your hands on colliders, the first question you have to ask yourself is:
Why am I creating this Collider?
Followed by:
What is my Collider going to do?
VRChat 100 Avatars world
These two questions are essential to keep your collision complexity as low as possible, since you want the physics engine running as smoothly as possible to avoid artifacts in the player collision.
​
Gameplay guides collisions. There is no reason to create a collider for everything in the scene. Instead, think about how the player is going to play (or how you intend them to play).
Collision Complexity in 100 avatars VRChat world
Collision Complexity in 100 avatars VRChat world
The big box in the back keeps players from leaving the scene; caging the player in is a good way to let them climb whatever they want without worrying that they might break out of the scene.

Once again, one of the best practices in game development, this time applied to colliders, is doing the work by hand. Don't let the engine do the math without knowing exactly what it is doing. Evaluating the most suitable collider for each occasion gives you tighter control when debugging.
Mesh Collider match shape vrchat
For example, these tree logs don't use a Mesh Collider to match their shape exactly. Why? There is no reason to spend a complex collision here when the player just needs to notice there is a log in their way, nothing else.
Mesh Collider match shape vrchat
Another example of collider design: you don't need to create a collider for everything. If we had created a collider for each small rock, the player would notice little bumps when walking, which would be very uncomfortable, or at least wouldn't match the playable vision we had. Instead, the ground is a Mesh Collider using the same ground mesh, and the grass is not collidable at all.
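The "pick the cheapest collider that does the job" idea above can be sketched in a small helper. This is a hypothetical utility, not code from the project; the `needsTightFit` decision stands in for the gameplay questions discussed earlier:

```csharp
using UnityEngine;

// Hedged sketch: add the cheapest collider that fits the gameplay need.
public static class ColliderSetup
{
    public static void AddCollider(GameObject prop, bool needsTightFit)
    {
        if (needsTightFit)
        {
            // A convex Mesh Collider hugs the shape, but keep the source
            // mesh simple: Unity limits convex colliders to ~256 triangles.
            var mc = prop.AddComponent<MeshCollider>();
            mc.convex = true;
        }
        else
        {
            // A box is usually enough for logs, crates, fences...
            prop.AddComponent<BoxCollider>();
        }
    }
}
```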
Collider design vrchat
As a last practical example, I want to point out that our trees in the Avatar Garden have no collisions on top, since no player can reach the high treetops. And because no primitive collider worked well for the curvature of our model, we created a custom low-detail model just to fulfil this Mesh Collider need.
​
Other objects where we decided to use Mesh Colliders were bushes and medium-sized plants, because there was no way to use primitive-shaped colliders on such shapeless vegetation. We kept the shape of all the Mesh Colliders as simple as possible, or enabled the Convex option to reduce them to 256 triangles when they were higher.
Simple Mesh Colliders in a rock

Conclusion

When it comes to game development, physics, or at least basic collision physics, are the second stage of environment development, so always keep them in mind when building your worlds! They can be a true game changer in how the experience feels and is enjoyed. Keep it simple, but also keep it clever!

You are more than welcome to drop any question or your try-out and results!
Join us: https://discord.gg/UDf9cPy

Additional sources:
https://docs.unity3d.com/2018.4/Documentation/Manual/CollidersOverview.html
https://yhscs.y115.org/program/lessons/unityCollisions.php
https://docs.unity3d.com/2018.4/Documentation/Manual/RigidbodiesOverview.html


Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving everyday to be a better hooman.
twitter

Optimizing VRChat Worlds: Occlusion Culling

1/21/2021


 
Premise
VRChat is a well-known social platform for everyone with VR equipment, and even without any specific hardware you can join the party from a standard PC screen! When you create a scenario, you have to be careful with texture sizing and data management to make sure that not only you can run your scene, but your target audience can too! If you fail to keep hardware requirements accessible, you fail to create a popular VRChat world.
The Mission
This guide focuses on the Occlusion Culling feature of Unity and how to improve the performance of the scene by hiding what can't be seen. We will also talk about occlusion portals, although this scenario doesn't use any, as it is all outdoors with no enclosed areas.
Resources
  • Unity Editor (2018.4.20f1)
Please note that this guide is VRChat focused, but everything here applies to any Unity project.
Optimizing VRChat worlds with Occlusion Culling
#00 What is Occlusion Culling

Occlusion culling is a process which prevents Unity from performing rendering calculations for GameObjects that are completely hidden from view (occluded) by other GameObjects.
Every frame, a Camera performs culling operations that examine the Renderers in the Scene and exclude (cull) those that do not need to be drawn. By default, Cameras perform frustum culling, which excludes all Renderers that do not fall within the Camera’s view frustum. However, frustum culling does not check whether a Renderer is occluded by other GameObjects, and so Unity can still waste CPU and GPU time on rendering operations for Renderers that are not visible in the final frame. Occlusion culling stops Unity from performing these wasted operations.

https://docs.unity3d.com/Manual/OcclusionCulling.html
​

This is basically the core of the whole system. The technique avoids real-time calculations for gameObjects that are not visible from the camera frustum, which improves framerate and runtime performance.
Occlusion Culling on Unity 3D
#01 How do you apply it to your scene

To begin setting up occlusion, check the Static box on your GameObjects, or click its drop-down and toggle Occluder Static and Occludee Static. Another approach is to select the desired GameObjects and toggle the options in the Object tab of the Occlusion window.
Occlusion Culling on Unity 3D
Occlusion Culling on Unity 3D
This tells the engine to consider the gameObject when calculating the Occlusion Data within its Occlusion Area (Unity treats the whole scene as a single area if you don't configure one before baking).
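Ticking those checkboxes can also be automated. This hypothetical editor-only helper (the menu path is made up) does the same as the Inspector drop-down, using Unity's `GameObjectUtility` API:

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch: flags the current selection as occluder/occludee,
// equivalent to ticking OccluderStatic/OccludeeStatic in the Inspector.
public static class OcclusionFlags
{
    [MenuItem("Tools/Mark Selection Occlusion Static")]
    static void MarkSelection()
    {
        foreach (GameObject go in Selection.gameObjects)
        {
            var flags = GameObjectUtility.GetStaticEditorFlags(go);
            flags |= StaticEditorFlags.OccluderStatic | StaticEditorFlags.OccludeeStatic;
            GameObjectUtility.SetStaticEditorFlags(go, flags);
        }
    }
}
```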
Occluders and Occludees
The difference between these two occlusion concepts is pretty simple, but it's important to keep in mind when building your scene's occlusion areas and data.
​
  • An Occluder is an object that can hide other objects.
  • An Occludee is an object that can be hidden from view by another object (an Occluder). If you uncheck this, the object is treated as if it were on another layer, never being hidden by other MeshRenderers.
​
An example of unchecking the Occludee toggle would be large objects like the ground, which should be treated separately to ensure they are always rendered.
#02 Occlusion Portals and Occlusion Areas

Occlusion Areas are cube-shaped volumes that group all the gameObjects inside them; those objects are only rendered when the camera is placed inside the same area. This works well if you have multiple enclosed areas. In our case, Occlusion Areas didn't make sense, as the whole scene is one open space with no visual walls dividing it.
​
Occlusion Portals connect two Occlusion Areas so the camera can render both areas through the portal's region. The Open toggle allows or disallows this connection.
More info: https://docs.unity3d.com/2018.4/Documentation/Manual/class-OcclusionArea.html
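Portals can also be driven from code, which is handy when a door opens and the area behind it should start rendering. This is a hedged sketch; the `PortalDoor` class and the way you call `SetDoorOpen` are hypothetical, but `OcclusionPortal.open` is the real Unity property:

```csharp
using UnityEngine;

// Sketch: toggling an Occlusion Portal at runtime, e.g. when a door opens.
public class PortalDoor : MonoBehaviour
{
    [SerializeField] OcclusionPortal portal; // portal between two areas

    public void SetDoorOpen(bool isOpen)
    {
        // While closed, the area behind the portal stays culled.
        portal.open = isOpen;
    }
}
```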
100 Avatars world in VRChat top view
top view of the 100 Avatars world
Occlusion Culling on Unity 3D
Occlusion Culling on Unity 3D
Occlusion areas and portals
#03 Alternatives to Unity's Occlusion Culling system

Unity's occlusion system uses a built-in version of Umbra. Like any system, it has its flaws and strengths compared to other occlusion engines. On other projects I have personally worked with Sector, an asset package from the Asset Store that I found very helpful; at the time I used it, it was considerably better than Unity's Umbra, with more flexible settings as its main selling point.
​
Another thing to keep in mind is the use of shaders with an excess of passes. Each pass is a whole mesh calculation for the material being rendered, so materials with more than two passes can be problematic on lower-end platforms like mobile. I state two as the minimum because transparent materials require two passes; furthermore, they require the renderer to draw what is behind the transparent mesh, so they are quite a hard limit for low-end platforms.
Batch example

  • Mesh render: 1
  • Apply material albedo: 1
  • Apply material transparency: 1
  • Apply lighting: 1

Please keep in mind that static-batched meshes get combined at runtime by the Unity engine, which reduces the mesh-render batches but keeps the material batches.
#04 Occlusion in the Gauguin Avatar Garden

The whole scene is marked as Static, as there are no dynamic objects to keep in mind (the water is animated through the material, not the shader). This made the first steps of the occlusion setup fairly easy. Keep in mind the Smallest Occluder size you set: the bigger it is, the less accurate the culling, but also the smaller the baked data. Each project needs its own balance.


In Gauguin's case we set the size to 1.5, meaning that the smallest box used to pack objects was 1.5 units (meters) on the x/y/z axes.


The Smallest Hole float tells the camera how big a hole in a mesh has to be before it starts rendering what is behind it. This is especially tricky on elements with small holes or meshes with complicated shapes.
​

The Backface Threshold sets how directional a mesh has to be to stay in the occlusion data. The higher the value, the more aggressive the occluder, making the camera skip meshes that are not facing towards it.
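The three bake settings above can also be set and baked from an editor script. The menu path and the Smallest Hole value are hypothetical placeholders, but the `StaticOcclusionCulling` properties are Unity's real API, and 1.5 matches the Smallest Occluder we used for Gauguin:

```csharp
using UnityEditor;

// Editor-only sketch: configure the occlusion bake values and recompute
// the occlusion data for the open scene from script.
public static class OcclusionBake
{
    [MenuItem("Tools/Bake Occlusion")]
    static void Bake()
    {
        StaticOcclusionCulling.smallestOccluder = 1.5f;  // units (meters)
        StaticOcclusionCulling.smallestHole = 0.25f;     // hypothetical value
        StaticOcclusionCulling.backfaceThreshold = 100f; // no backface pruning
        StaticOcclusionCulling.Compute();                // bake the data
    }
}
```

Rerunning `Compute()` after moving or adding static objects keeps the baked data in sync with the scene.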
Occlusion Culling on Unity 3D
100 Avatars world in VRChat
100 Avatars world in VRChat
100 Avatars world in VRChat
Note that all the black areas are objects that are not being rendered; their baked shadows remain on the meshes that are still drawn. You can also see the area the camera is in, with the corresponding portals. When there are none in the scene, Unity creates them for you.
The best workaround is to always place them manually and never let the program do the math for you.
​
For this scene, the ground meshes were left without the Occludee option, as smaller avatars were clipping through the ground floor due to the camera frustum and its near clip (which cannot be changed, as that's how VRChat works).
live action of occlusion culling in 100 Avatars world
A live action of how the occlusion is working
#05 cOcclunclusion

You may find Occlusion Culling easy to set up, or even unnecessary!
But the truth is that it is a vital piece in the final stages of environment development: it acts as the manager, loader and unloader of everything the camera sees, ensuring a smooth experience while maintaining the desired quality level. Objects are hidden from view but not unloaded from the scene, so they can be shown and hidden quickly.


Also, each time you modify a gameObject property such as its transform, or add/remove gameObjects from the scene, you should rebuild your Occlusion data, as the old gameObjects are still baked into it.
​

Keep this in mind, especially when working with large environments or low-spec platforms.
You are more than welcome to drop any question or your try-out and results!

Join us: https://discord.gg/UDf9cPy

Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving everyday to be a better hooman.
Twitter

Create and upload a VRChat Avatar with blend shapes visemes - Remaster

12/16/2020


 
The Mission
VRChat uses blend shapes to detect phonemes via the microphone and adjust your character's mouth to the matching shapes, giving the impression that your character is talking.
Resources
  • MayaLT 2018
  • Unity 2018.4.20f1
How to do Blend Shapes Visemes for VRChat
Isn't it great when you talk with somebody online and you see their mouth moving while they talk?
It really adds to the experience, especially in Virtual Reality.

That's what this is about.
Creating different shapes so you can see yourself talking when you look at a mirror.
​
It's the little touches that take something from good to better.
Let's say you already have your model done; it's rigged and skinned, so it's ready to go.
But you want to make some blend shapes, because they look neat and funny in-game.
​
Well, let's make them!
​

First, we need to know how many blend shapes we need to make. VRChat uses 16 different blend shapes. These are:
  • Blink Both eyes
  • aa
  • ch
  • dd
  • ee
  • ff
  • ih
  • kk
  • nn
  • oh
  • ou
  • pp
  • rr
  • ss
  • sil (silence)
  • th
To make things easier down the road, I highly recommend always using the same prefix in each name, so that later in Unity the assignment is almost automatic; the prefix being vrc_v_blendshapename.
Different blend shapes visemes used in VRChat
This gives you a general idea of how I made the different mouth shapes for each viseme. Another thing to keep in mind: even though vrc_v_sil doesn't change the shape whatsoever, you must still change something slightly for it to register as a blend shape.
​
Now that we have every shape done, we will use the Shape Editor.
Open the Shape Editor from the Sculpting tab in Maya, or by going to Deform > Blend Shape.
Autodesk Maya Deform Blend Shape
Now, select one shape that you created, then select the original model. Go to the Shape Editor and click Create Blend Shape. Repeat this for all 16 shapes.
Autodesk MayaLT Shape Editor tab
Export and import

We have every shape ready, so now we will export the whole package.
Select all the shapes, meshes and bones and go to Export.
Be mindful to check the Animation box, and make sure Blend Shapes is activated too; if it's not, they won't export correctly.
Autodesk Maya export blend shapes
Now write the name you want and export it.
Upload

You should already have Unity 2018.4.20f1 (or whichever version VRChat currently uses) set up. If you don't, check out this guide by my friend Alejandro Peño, where he explains how to set it up:
Upload Avatars to VRChat Cross-Platform (PC and Oculus Quest).
With the character imported, add a new component called VRC_Avatar Descriptor.
Unity 3D VRC_Avatar Descriptor component
Now a couple of parameters will appear that you can edit.
Unity 3D VRC_Avatar Descriptor component
We are only going to modify three of them: View Position, LipSync and Eye Look.
View Position

This parameter decides where the first-person point of view is located; in other words, from where you are going to see inside VRChat.
It's a no-brainer that we should put the little indicator at eye level, as close to the eyes as possible.
Avatar view position in Unity for VRChat
Lip Sync

How can we make our character talk? With this option right here!
In mode, select Viseme Blend Shape.
Unity VRChat Viseme Blend Shape
A Face Mesh field will now appear. Using the little circle on the right, select the mesh where the blend shape visemes are stored. In this case, since everything is one mesh, there is only one option.
Unity VRChat Viseme Blend Shape
Now we are talking (pun intended). Like I said before, using the right names makes our lives easier: every single blend shape falls into place automatically. But just to be sure, give it a look.
Unity VRChat Viseme Blend Shape
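If you want to double-check the names from code rather than by eye, a small sketch like this can verify that a viseme exists on the mesh and preview it. The `VisemeCheck` class is hypothetical, but `GetBlendShapeIndex` and `SetBlendShapeWeight` are Unity's real API, and `vrc_v_aa` follows the naming convention from this guide:

```csharp
using UnityEngine;

// Sketch: verify at runtime that a viseme blend shape was found,
// using the vrc_v_* naming convention from this guide.
public class VisemeCheck : MonoBehaviour
{
    [SerializeField] SkinnedMeshRenderer face;

    void Start()
    {
        int index = face.sharedMesh.GetBlendShapeIndex("vrc_v_aa");
        if (index < 0)
            Debug.LogWarning("Viseme vrc_v_aa not found - check your names!");
        else
            face.SetBlendShapeWeight(index, 100f); // preview the shape fully on
    }
}
```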
Eye Look

If you have sharp eyes, you might have noticed that Blink was nowhere to be seen (these puns just keep coming). That's because we will use the Eye Look tab to configure it.
​
Click on Enable and a couple of options will appear.
Ignore the rest and go to the Eyelids section, then select the Blendshapes option.
Unity VRChat Eye Look
Select once again the mesh where the blend shapes are stored, and something like this will appear.
Unity VRChat Blink Eye look
If something wasn't assigned properly, you can change it from here. Since we only have the Blink blend shape state, we will leave Blink as it is and change the other two so they have no state at all. Like this:
Unity VRChat Eyelids
PRO TIP

Use the Preview button to make sure that everything works correctly. You can even check all the other blend shapes if you want!
Once it's finished, you can upload the character like you usually do. Again, if you don't know how to do it, you can check this guide:
Upload Avatars to VRChat Cross-Platform (PC and Oculus Quest).
Conclusion

Blend shape visemes are a great way to give life to your avatars in VRChat, and I would 100% recommend using them in your future avatars.
Depending on the model, it takes around 30 minutes to an hour to create all the shapes needed, and they look great.
​
It's a lot of fun making these, so give them a try!
Pedro Solans
3D ANIMATOR
​Junior 3D Animator improving every day possible. Videogame and cat enthusiast.
Twitter
 © 2015-2022 POLYGONAL MIND LTD. ALL RIGHTS RESERVED.