
Optimizing VRChat Worlds: Collision Debugging

3/2/2021

Premise
VRChat is a well-known social platform for everyone with VR equipment, and even if you don't have any specific hardware, you can join the party from your standard PC screen too! When you create a scenario, you have to be careful with texture sizes and data management to make sure that not only you can run your scene, but your target audience can too! If you fail to democratize the hardware requirements, you fail to create a popular VRChat world.

The mission

This guide focuses on the Physics Debugger function of Unity, how to set up your 3D colliders in the scene, and how to manage triggers in VRChat.

Having a simple collision setup will ease the calculations on low-end machines, as the physics interactions will be fewer and simpler.

Resources
  • Unity Editor (2018.4.20f1)
  • VRChat SDK2

Please note that this guide is VRChat-focused, but the underlying topic is a general Unity Engine matter.

When this article was written we did not yet have our hands on the VRChat SDK3 UDON system, so this is mainly written for VRChat SDK2 and general Unity knowledge.
Optimize VRChat worlds Collision Debugging tutorial guide
What are Colliders?

Collider components define the shape of a GameObject for the purposes of physical collisions. A collider, which is invisible, does not need to be the exact same shape as the GameObject's mesh.

Every time you play a game, whenever the player moves or has "physical" interactions with the environment, such as dropping a glass or throwing a rock, collisions come into play, behaving as solids, triggers or gravity-affected objects.

This sorcery works with default parameters, but as in every game engine you can set up colliders to your liking to match the interaction results you want. For example, you could walk on firm ground but sink progressively as you step into muddy dirt or enter a flowing river.

Colliders are basically interactions between objects that send messages to the engine to determine whether an object is colliding, and therefore cannot pass through the other one, or whether it is triggering, in which case it can enter the other collider's volume and (if set up to do so) send a specific message.
​
colliders essentials in video games
Here we could say that the player walks along the map and decides to enter the river. That literal description translates into collider design: colliders define the height the player walks on, and triggers modify the player's speed, making them slower the further they go into the river. This is exactly what happens in our Avatar Garden environment:
colliders essentials in video games
As the player walks through the river, they pass through the water mesh and walk on the riverbed instead.
Types of Colliders

The three types of Collider interactions present in Unity are Static, Rigidbody and Kinematic.

Each has a specific use. Static colliders are for GameObjects that are not meant to move and are not affected by gravity. Rigidbody colliders are for objects that have "forces" applied to them, so gravity (and any other configured forces) affects them every frame (unless they are in sleeping mode). Last but not least, there are Kinematic colliders, meant for kinematic bodies that are not driven by the physics engine; read more about kinematics here.
​
Colliders shapes in video games
​If we recreate this mockup in Unity and press Play, the engine will make the ball fall and the rock won't move.
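As a reference, here is a minimal sketch of that mockup built from a script instead of the Inspector (the class name and positions are made up, purely to illustrate the static vs Rigidbody behaviour):

    using UnityEngine;

    // Hypothetical setup script: a static "rock" (collider only, never moves)
    // and a "ball" with a Rigidbody that gravity pulls down on Play.
    public class CollisionMockup : MonoBehaviour
    {
        void Start()
        {
            // Static collider: no Rigidbody, so the physics engine treats it as immovable.
            GameObject rock = GameObject.CreatePrimitive(PrimitiveType.Cube);
            rock.transform.position = Vector3.zero;

            // Rigidbody collider: forces (gravity by default) are applied every physics step.
            GameObject ball = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            ball.transform.position = new Vector3(0f, 5f, 0f);
            ball.AddComponent<Rigidbody>();
        }
    }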
​
To apply a 3D Collider component to an object, we have different options at our disposal:
​
  • Box Collider: The simplest and most used type of collider, a bounding box that creates a collision volume around the mesh. It is suitable for almost any type of collider interaction and is perfect for trigger volumes.
Unity Colliders: Box collider
  • Sphere Collider: Perfect for round or almost spherical objects that have to roll, or when you want to keep that curved collision without using a Mesh Collider.
Unity Colliders: Sphere collider
  • Capsule Collider: For cylindrical objects; it is like a sphere extruded from the middle, and is good for characters and other objects that need roundness but require tall colliders.
Unity Colliders: Capsule collider
  • Wheel Collider: Suitable for torus-shaped objects like, as the name itself says, wheels. Its use is focused on racing games or vehicles that use wheels. It applies forces and makes it easy to configure a vehicle that runs over different types of soil or roads.
Unity Colliders: Wheel collider
  • ​​Terrain Collider: Acts as Collider based on the data collected from the Terrain GameObject.​
Unity Colliders: Terrain collider
  • Mesh Collider: The perfect option for non-primitive shapes that require complex collision. The quickest way to make them simpler and cheaper for the engine is to check the "Convex" toggle, which limits the mesh collider to a maximum of 256 triangles. This component also comes in handy for custom colliders that we create ourselves in our modelling toolkit.
Unity Colliders: Mesh collider
By default, the Mesh Collider uses the mesh assigned to the Mesh Filter as the mesh representing the collider.

The Rigidbody component is separate from the Collider component, giving it independence and control of its own and acting as an "add-on" on top of the static collider. It also states whether the collider's interaction Is Kinematic or not.
Unity Colliders: Rigidbody
Applying a Collider to a GameObject is as easy as adding the component in the Inspector window once you have your GameObject selected. It is located under Component > Physics, but you can also search for it using the keyword Collider.
Unity Colliders components
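If you prefer to set this up from code, the same components can be added with AddComponent; a quick sketch (standard Unity API, with a made-up class name):

    using UnityEngine;

    // Sketch: adding colliders from a script instead of the Inspector.
    public class ColliderSetupExample : MonoBehaviour
    {
        void Awake()
        {
            // A simple bounding box, also usable as a trigger volume.
            BoxCollider box = gameObject.AddComponent<BoxCollider>();
            box.isTrigger = false;

            // A mesh collider for complex props; it picks up the MeshFilter's mesh
            // by default, and Convex limits it to a maximum of 256 triangles.
            MeshCollider meshCol = gameObject.AddComponent<MeshCollider>();
            meshCol.convex = true;
        }
    }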
What does the Physics Debugger do?

After we set up our colliders in the scene, the best way to preview and correct them before testing is the Physics Debugger.

You will find this window located in Window/Analysis/Physics Debugger
Physics Debugger in Unity
This window overdraws the colliders on top of your meshes, as if adding a layer of semi-transparent objects whose colour indicates the collider type: red for static, yellow for trigger, green for rigidbody and blue for kinematic colliders.
Physics Debugger: Collision geometry
Here you can toggle the display of the Collision Geometry, and you can also enable Mouse Select to pick your GameObjects directly by their collider volume.
Physics Debugger in Unity
This window offers a handful of settings to help us configure and size the colliders as comfortably as possible.

You can change the colours to whatever suits you best, adjust the transparency, or randomise them to create variation between colliders.
​
The Physics Debugger is going to be your best friend for spotting flaws in your physics before playing, or after noticing errors while testing!
Triggers in VRChat

Anyone experienced in game development will know that in Unity, activating a trigger requires a C# script telling the engine what to do when one collider triggers another. The Trigger bool on the Collider component tells the physics engine to let other colliders pass through the triggered one. This is not possible in VRChat due to its custom script limitations, so triggers are managed through its Event Handler instead: just add the VRC_Trigger script and the SDK will add the Event Handler.
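For context, a plain-Unity trigger script would look roughly like the sketch below; it cannot be uploaded with a VRChat SDK2 world and is shown only to illustrate the vanilla behaviour (the door object and tag are hypothetical):

    using UnityEngine;

    // Plain-Unity example (not usable in VRChat SDK2 worlds): the collider on this
    // GameObject must have "Is Trigger" checked for these callbacks to fire.
    public class DoorTrigger : MonoBehaviour
    {
        public GameObject door; // hypothetical object to hide/show

        void OnTriggerEnter(Collider other)
        {
            if (other.CompareTag("Player"))
                door.SetActive(false); // open the way when the player walks in
        }

        void OnTriggerExit(Collider other)
        {
            if (other.CompareTag("Player"))
                door.SetActive(true);
        }
    }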
VRChat trigger in Unity
From this point on, programming in VRChat becomes visual and no real code is needed. Just be aware that some things change place, and it all becomes more "artist friendly".
VRChat trigger in Unity
To add a behaviour as a result of a trigger, just click Add in the VRC_Trigger component and start configuring your interactions. There are so many options that covering every general use of these triggers is nearly impossible. So yes, the sky is the limit. Just remember that these operations can hurt performance badly if they turn out to be expensive to execute.
Applying Colliders in the Gauguin Avatar Garden (100 Avatars)

The colliders in the Gauguin Avatar Garden by Polygonal Mind are a mix of Box Colliders and Mesh Colliders, because we wanted to keep things simple while keeping tight control over certain collider volumes. But that alone doesn't explain why it is set up this way.

When you get your hands on colliders, the first question you have to ask yourself is:
Why am I creating this collider?
Followed by:
What is this collider going to do?
VRChat 100 Avatars world
These two questions are essential to keep your collision complexity as low as possible, since you want the physics engine to run as smoothly as possible and avoid artifacts in the player's collision.
​
Gameplay guides collisions. There is no reason to create a collider for everything in the scene. Instead, think about how the player is going to play (or how you intend them to play).
Collision Complexity in 100 avatars VRChat world
Collision Complexity in 100 avatars VRChat world
The big box in the back keeps players from leaving the scene. Caging the player is a good way to let them climb whatever they want without worrying that they will end up breaking the scene.

Once again, one of the best practices in game development, this time applied to colliders, is doing the work by hand. Don't let the engine do the math for you without knowing exactly what it is doing. Evaluating the most suitable collider for each situation will give you tighter control over the debugging process.
Mesh Collider match shape vrchat
For example, these tree logs don't use a Mesh Collider to match their shape exactly when the collider comes into play. Why? There is no reason to spend complex collision here when the player only needs to notice that there is a log in their way, nothing else.
Mesh Collider match shape vrchat
Here is another example of collider design: you don't need to create a collider for everything. If we had decided to create a collider for each small rock, the player would notice little bumps while walking, which would be very uncomfortable, or at least it wouldn't match the playable vision we had. Instead, the ground is a Mesh Collider using the same ground mesh, and the grass is not collidable either.
Collider design vrchat
And as the last practical example shown here, I want to point out that our trees in the Avatar Garden needed collisions on their tops: any player can reach the high treetops, and since no primitive collider worked well for the curvature of our model, we decided to create a custom mesh just to fulfil this Mesh Collider need.
​
Other things we decided to use Mesh Colliders for were bushes and medium-sized plants, because there was no way to use primitive-shaped colliders for such shapeless vegetation. We tried to keep the shape of all the Mesh Colliders as simple as possible, or activated the "Convex" option to reduce them to 256 triangles if they were above that.
Simple Mesh Colliders in a rock

Conclusion

In collision, when it comes to game development, physics, or at least basic physics, is the second stage of environment development, so always keep it in mind when building your worlds! It can be a true game changer in how the experience is felt and enjoyed. Keep it simple, but also keep it clever!

You are more than welcome to drop any question or your try-out and results!
Join us: https://discord.gg/UDf9cPy

Additional sources:
https://docs.unity3d.com/2018.4/Documentation/Manual/CollidersOverview.html
https://yhscs.y115.org/program/lessons/unityCollisions.php
https://docs.unity3d.com/2018.4/Documentation/Manual/RigidbodiesOverview.html


Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving everyday to be a better hooman.
Twitter

Optimizing VRChat Worlds: Occlusion Culling

1/21/2021

Premise
VRChat is a well-known social platform for everyone with VR equipment, and even if you don't have any specific hardware, you can join the party from your standard PC screen too! When you create a scenario, you have to be careful with texture sizes and data management to make sure that not only you can run your scene, but your target audience can too! If you fail to democratize the hardware requirements, you fail to create a popular VRChat world.
The Mission

This guide focuses on the Occlusion Culling function of Unity and how to improve the performance of the scene by hiding what can't be seen. We will also talk about occlusion portals, although this scene doesn't use any, as it is all outdoors with no enclosed areas.

Resources
  • Unity Editor (2018.4.20f1)

Please note that this guide is VRChat-focused, but the underlying topic is a general Unity Engine matter.
Optimizing VRChat worlds with Occlusion Culling
#00 What is Occlusion Culling

Occlusion culling is a process which prevents Unity from performing rendering calculations for GameObjects that are completely hidden from view (occluded) by other GameObjects.
Every frame, a camera performs culling operations that examine the Renderers in the Scene and exclude (cull) those that do not need to be drawn. By default, Cameras perform frustum culling, which excludes all Renderers that do not fall within the Camera's view frustum. However, frustum culling does not check whether a Renderer is occluded by other GameObjects, and so Unity can still waste CPU and GPU time on rendering operations for Renderers that are not visible in the final frame. Occlusion culling stops Unity from performing these wasted operations.

https://docs.unity3d.com/Manual/OcclusionCulling.html
​

This is basically the core of the whole system. The technique avoids doing real-time rendering calculations for GameObjects that the camera cannot actually see, which improves framerate and runtime performance.
Occlusion Culling on Unity 3D
#01 How do you apply it to your scene

To begin creating an "occlusion area", you need to check the "Static" box, or click its dropdown and toggle "Occluder Static" and "Occludee Static". Another approach is to select the desired GameObjects and toggle the option in the Object tab of the Occlusion window.
Occlusion Culling on Unity 3D
Occlusion Culling on Unity 3D
This tells the engine to consider the GameObject when calculating the occlusion data within its Occlusion Area (Unity considers the whole scene as a single area if you don't configure one prior to baking).
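If you have many objects to flag, the same static flags can also be set in bulk from an editor script; a minimal sketch using Unity's editor API (the menu path is made up, and the file must live in an Editor folder):

    using UnityEditor;
    using UnityEngine;

    // Editor-only sketch: flag every selected GameObject as Occluder + Occludee Static.
    public static class OcclusionFlagsExample
    {
        [MenuItem("Tools/Mark Selection Occluder And Occludee Static")]
        static void MarkSelection()
        {
            foreach (GameObject go in Selection.gameObjects)
            {
                StaticEditorFlags flags = GameObjectUtility.GetStaticEditorFlags(go);
                flags |= StaticEditorFlags.OccluderStatic | StaticEditorFlags.OccludeeStatic;
                GameObjectUtility.SetStaticEditorFlags(go, flags);
            }
        }
    }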
Occluders and Occludees
The main difference between these two occlusion concepts is pretty simple, but it's important to keep it in mind when building your scene's occlusion areas and data.
​
  • An Occluder is an object that can hide other objects.
  • An Occludee is an object that can be hidden from view by another object (an Occluder). If you uncheck this, the object is treated differently, as if it were on another layer, and will not be hidden by other Mesh Renderers.
​
An example of toggling Occludee off would be large objects like the ground, which should be treated separately to ensure they are always rendered.
#02 Culling Portals and Culling Areas

Occlusion Areas are box-shaped volumes that group all the GameObjects inside them, which are only rendered when the camera is inside the same area. This works well if you have multiple enclosed areas; in our case, occlusion areas didn't make sense, as the whole scene is one open space with no visual walls dividing it.
​
Occlusion Portals connect two occlusion areas so the camera can render both areas through the portal's region. The Open toggle allows or disallows this connection.
More info: https://docs.unity3d.com/2018.4/Documentation/Manual/class-OcclusionArea.html
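Since the Open toggle is just a component property, it can also be flipped at runtime in plain Unity; a sketch for context (our scene has no portals, and in a VRChat SDK2 world you cannot use custom scripts like this):

    using UnityEngine;

    // Plain-Unity sketch: open or close an OcclusionPortal, e.g. when a door opens,
    // so the area behind it is only rendered while the doorway is actually open.
    public class PortalToggleExample : MonoBehaviour
    {
        public OcclusionPortal portal; // assign the portal in the Inspector

        public void SetDoorOpen(bool isOpen)
        {
            portal.open = isOpen; // a closed portal culls everything behind it
        }
    }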
100 Avatars world in VRChat top view
top view of the 100 Avatars world
Occlusion Culling on Unity 3D
Occlusion Culling on Unity 3D
Occlusion areas and portals
#03 Alternatives to Unity's Occlusion Culling system

Unity's occlusion system uses a built-in version of Umbra. Like any other system, it has its weak and strong points compared to other occlusion engines. For other projects I have personally worked with Sector, an asset package from the Asset Store that is very helpful; by the time I worked with it, it was considerably better than Unity's built-in Umbra (more flexible settings being its main selling point).
​
Another thing to keep in mind is the use of shaders with an excess of passes. Each pass means rendering the whole mesh again for that material, so materials with more than two passes can be problematic on lower-end platforms like mobile. I say two as a minimum because transparent materials require two passes; on top of that, they force the renderer to also draw what is behind the transparent mesh, so they are quite a hard limit for low-end platforms.
Batch example (one pass each):
  • Mesh render: 1
  • Apply material albedo: 1
  • Apply material transparency: 1
  • Apply light: 1

Please keep in mind that "static batching" meshes get combined at runtime by the Unity engine, which reduces the mesh-render batches but keeps the material batches.
#04 Occlusion in the Gauguin Avatar Garden

The whole scene is marked as "Static", as there are no dynamic objects to keep in mind (the water is animated through the material, not a shader). This made the occlusion setup "easy", at least for the first steps. Keep in mind the size of the occluder box you set: the bigger it is, the less "accurate" the culling will be, but at the same time the baked data will be much smaller. Each project needs its own balance.


In this case, for Gauguin, we set the size to 1.5, meaning that the smallest "box" used to pack objects together was 1.5 units (metres) on the x/y/z axes.


The Smallest Hole value tells the camera how big a "hole" in the geometry has to be before what is behind it starts being rendered. This is especially tricky on elements with small holes or meshes with complicated shapes.
​

The Backface Threshold relates to the directionality of a mesh when deciding whether to render it. The higher it is, the more "aggressive" the occlusion will be, making the camera skip meshes that are not facing towards it.
Occlusion Culling on Unity 3D
100 Avatars world in VRChat
100 Avatars world in VRChat
100 Avatars world in VRChat
Note that all the "black" shadows correspond to objects that are not being rendered; their baked lighting remains on the meshes that are still rendered. You can also see the "area" the camera is in, with its corresponding portals. When there are none in the scene, Unity creates them for you.
The best workaround is to always do it manually and never let the program do the math for you.
​
For this scene, the ground meshes were kept without the Occludee option, as smaller avatars could see through the ground floor due to the camera frustum and its near clip plane (this cannot be changed, as that is simply how VRChat works).
live action of occlusion culling in 100 Avatars world
A live action of how the occlusion is working
#05 cOcclunclusion

You may find occlusion culling easy to set up, or even unnecessary!
But the truth is that it is a vital piece of the final stages of environment development: it manages, loads and unloads everything the camera sees, ensuring a smooth experience while maintaining the desired quality level, keeping objects hidden from view but not unloaded from the scene so they can be shown and hidden again quickly.


Also, each time you modify a GameObject property such as its transform, or add/remove GameObjects from the scene, you should rebuild your occlusion data, as the old GameObjects are still "baked" into it.
​

Keep this in mind, especially when working with large environments or low-spec platforms.
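If you find yourself rebaking often, the bake can also be triggered from a small editor script; a sketch (the menu path is made up, and the bake parameters themselves are still the ones set in the Occlusion window):

    using UnityEditor;

    // Editor-only sketch (place in an Editor folder): clear the stale occlusion data
    // and rebake it after moving, adding or removing static GameObjects.
    public static class OcclusionRebakeExample
    {
        [MenuItem("Tools/Rebake Occlusion Culling")]
        static void Rebake()
        {
            StaticOcclusionCulling.Clear();
            StaticOcclusionCulling.Compute(); // uses the parameters set in the Occlusion window
        }
    }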
You are more than welcome to drop any question or your try-out and results!

Join us: https://discord.gg/UDf9cPy

Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving everyday to be a better hooman.
Twitter

Create and upload a VRChat Avatar with blend shapes visemes - Remaster

12/16/2020

The Mission
VRChat detects phonemes from your microphone and uses blend shapes to adjust your character's mouth to the corresponding shapes, giving the impression that your character is talking.
Resources
  • MayaLT 2018
  • Unity 2018.4.20f1
How to do Blend Shapes Visemes for VRChat
Isn't it great when you talk with somebody online and you see their mouth moving while they talk?
It really adds to the experience, especially in Virtual Reality.

That's what this is about.
Creating different shapes so you can see yourself talking when you look at a mirror.
​
It's the little touches that turn something good into something better.
Let's say you already have your model done; it's also rigged and skinned, so it's ready to go.
But you want to make some blend shapes, because in-game they look neat and funny.
​
Well, let's make them!
​

First, we need to know how many blend shapes we need to make. VRChat uses 16 different blend shapes. These are:
  • Blink Both eyes
  • aa
  • ch
  • dd
  • ee
  • ff
  • ih
  • kk
  • nn
  • oh
  • ou
  • pp
  • rr
  • ss
  • sil (silence)
  • th
To make things easier in the future, I highly recommend always using the same prefix for each name, so that later in Unity the setup is almost automatic. The prefix being vrc_v_blendshapename.
Different blend shapes visemes used in VRChat
This gives you a general idea of how I made the different mouth shapes depending on the viseme. Another thing to keep in mind is that even though vrc_v_sil barely changes the shape at all, you must still change something on it regardless.
​
Now that we have every shape done, we will use the Shape Editor.
Open the Shape Editor from the Sculpting tab in Maya, or by going to Deform > Blend Shape.
Autodesk Maya Deform Blend Shape
Now, select one of the shapes you created and then select the original model. Go to the Shape Editor and click on "Create Blend Shape". Repeat this for all 16 shapes.
Autodesk MayaLT Shape Editor tab
Export and import

We have every shape ready, so now we will export the whole package.
Select all the shapes, meshes and bones and go to Export.
Be mindful to check the Animation box, and make sure Blend Shapes is enabled too; if it's not, the file won't export correctly.
Autodesk Maya export blend shapes
Now write the name you want and export it.
Upload

You should have Unity 2018.4.20f1, or whichever version VRChat currently uses, already set up. If you don't, check out this guide made by my friend Alejandro Peño, where he explains how to set it up:
Upload Avatars to VRChat Cross-Platform (PC and Oculus Quest).
With the character imported, we will add a new component called VRC_Avatar Descriptor.
Unity 3D VRC_Avatar Descriptor component
A couple of parameters you can edit will now appear.
Unity 3D VRC_Avatar Descriptor component
We are only going to modify three of them: View Position, Lip Sync and Eye Look.
View Position

This parameter lets you decide where the first-person point of view is located. In other words, where you will see from inside VRChat.
It is a no-brainer that we should put the little indicator at eye level, as close as possible to the eyes.
Avatar view position in Unity for VRChat
Lip Sync

How can we make our character talk? With this option right here!
In mode, select Viseme Blend Shape.
Unity VRChat Viseme Blend Shape
A Face Mesh field will now appear. Using the little circle on the right, you can select the mesh where the blend shape visemes are stored. In this case, since it's all one mesh, we only have one option.
Unity VRChat Viseme Blend Shape
Now we are talking (pun intended). As I said before, using the right names makes our lives easier: every single blend shape falls into place. But just to be sure, give it a look.
Unity VRChat Viseme Blend Shape
Eye Look

If you have sharp eyes, you might have realized that Blink was nowhere to be seen (these puns just keep coming). That's because we will use the Eye Look section to configure it.
​
Click on Enable and a couple of options will appear.
Ignore the others, go to the Eyelids section and select the Blendshapes option.
Unity VRChat Eye Look
Once again select the mesh where the blend shapes are stored, and something like this will appear.
Unity VRChat Blink Eye look
If something is not properly assigned, you can change it from here. Since we only have the Blink blend shape, we will leave Blink as it is and change the other two so they have no state at all. Like this:
Unity VRChat Eyelids
PRO TIP

Use the preview button to make sure that everything works correctly. You can even check all the other blendshapes if you want!
Once it's finished, you can upload the character like you usually do. Again, if you don't know how to do it, you can check this guide:
Upload Avatars to VRChat Cross-Platform (PC and Oculus Quest).
Conclusion

Blend shape visemes are a great way to bring your avatars to life in VRChat.
I would 100% recommend using them in your future avatars.
Depending on the model, it takes around 30 minutes to an hour to create all the shapes needed, and they look great.
​
It's a lot of fun making these, so give them a try!
Pedro Solans
3D ANIMATOR
​Junior 3D Animator improving every day possible. Videogame and cat enthusiast.
Twitter

How to Rig and Prepare your Avatar for the Metaverse using Maya

12/16/2020

The Mission

What do we have inside us? Bones. So do all of our avatars (and yours too). But how do we put them there? And, more importantly, how do we adjust them to every possible humanoid avatar?
​Let's find out!
Resources

  • Maya LT 2018
  • Your favourite internet browser
Rig your avatar for the Metaverse using Maya by Polygonal Mind
Getting the rig

Since we want humanoid avatars, the best way to get a fast rig is to use Mixamo.
Mixamo is an automatic rigging web tool that lets you create a quick humanoid rig for free.
I won't cover how to use Mixamo, since we already have that covered in this post:
https://www.notion.so/polygonalmind/Fix-and-reset-your-Mixamo-rig-Pedro-eccd01b2095545749e0a3d2a3e573558

But I will explain how to use all the tools I used when rigging almost every one of the 200+ different avatars we have made for the 100 Avatars project.
So tag along, because in the world of rigging, patience is KEY.
Avatar imported

You now have the avatar ready in your Maya project.

There are a few places where you have to take a closer look, since they are the most problematic areas: the shoulders, armpits and hands. Depending on the character, you might have to look at other places too, especially if it is a complex character.
Bones too big? Maybe super small?

Go to Display > Animation > Joint Size and you can change it there!
Maya Rig Joint Size
Maya Rig character
Ask yourself: are all the bones where they are supposed to be?
In this case... no. Using the X-Ray Bones option you can easily see where each bone sits inside the body.
Maya Rig X-Ray Bones
In this case, the shoulders aren't where they should be, so, how can we move them?
Maya rig
With a really useful tool called Move Skinned Joints.

Go to the Rigging tab, and then to Skin. Almost at the bottom, you should find the tool. Click on the square on the right and then on any joint. Now you can move them freely without any problem!
Maya skin rigging
Use it to move the shoulders where they should be.
maya rigging
Now it's time to skin!
Maya paint skin weights
Open the Paint Skin Weights Tool from the same Skin menu as before.

Be sure to click the square on the right.

If you had your mesh selected, everything should now appear in black and white, representing the areas where each bone has influence over the mesh.
A new window will also appear.

In the upper part you can see each bone of the rig. Every time you click on one, you will see that bone's influence on the mesh.

Every bone has a lock on its left; this "freezes" the bone and its values so they never change. Super useful.

Right below that you can select different modes:
  • Paint: The default option, where you paint influences using the value you set below.
  • Select: Select any vertex, face or edge so you can paint only those regions.
  • Paint Select: Both in one.
​
Then there's the paint operation:
  • Replace: Replaces the current value with the brush value.
  • Add: Adds the brush value to the current value.
  • Scale: Like Add, but it scales the current value instead.
  • Smooth: Smooths the influence between different values so the change is not so sudden.
Maya rigging
Maya rig skin weight
Profile lets you select what kind of brush you want to use. The currently selected one simply puts the value on every vertex it touches; the other ones scale the value down towards the edges of the brush.
Value is the influence the brush applies: more means more influence, with a maximum of 1 and a minimum of 0.

The Flood button applies the selected value to the whole selected region.
maya rig brush
Lastly, you can change the brush size by pressing and holding the B key in the 3D viewport.
With this explanation of the tools, you should have a good idea of how to skin a character inside Maya.
​
Now, skinning is not easy, at least not if you want to do it right. It requires a lot of patience. Some advice I can give: try to use the value 1 as little as you can, and use the Smooth option often, since it really is invaluable. Don't be scared of rotating bones, and aim for the cleanest deformation break points in your mesh.
Conclusion

Remember to check those zones I wrote about earlier and have fun! Skinning is an important process and takes time. The more you practice the better you will become!
What now?
If you want to see the next step, read my post about how to make visemes for your avatar and configure them inside Unity:
https://www.polygonalmind.com/blog-posts/create-and-upload-a-vrchat-avatar-with-blend-shapes-visemes-remaster

Pedro Solans
3D ANIMATOR
Junior 3D Animator improving every day possible. Videogame and cat enthusiast.
Twitter

Creating water rivers in VRChat

8/7/2020

Premise
With the continuously rising popularity of VRChat and its standardisation as a virtual meeting point, Polygonal Mind contributed to its growth by creating 100 avatars available on the platform. This led the team to think about a "room" in the virtual world of VRChat where you could hang out with friends and change your avatar to one of our creations.
Creating water rivers for VRChat
The mission

Creating animated flow effects is not difficult to achieve if you organise your project correctly, simplifying resources and maximising their utility.
Resources

  • Maya LT 2019
  • Unity 2017.4.28f1
  • VRChat SDK
  • Photoshop CC
*Please note that the SDK has recently been updated, and the required Unity version has changed to 2018.4.20f1. This change has no direct impact on this guide.
Background of the issue

Creating a visually impressive scenario requires a high dose of creativity. Laura Usón led the visual development of the scene, taking the impressionist painter Gauguin as the main inspiration for our environment. The vivid colours of his paintings, together with the simplicity of the shape direction and the vague definition, are some of the main highlights of our environment.
gauguin painting
Conceptualisation of the environment, shape simplicity and colour usage.
gauguin painting
General composition reference, shape definition guidelines and colour palette
Avatar garden vrchat shore
Vrchat world Avatar Garden
Late development shots
Vrchat world Avatar Garden
In this project I took the lead on general composition and technical development, which meant being in charge of all the optimisation and the general look of the scene: compositing and creating a good-looking environment through level design, lighting and colour harmony. One of those tasks was getting water to flow through an otherwise mostly static environment.
​
To do so, we relied heavily on animating seamless textures to simulate flowing water.
#01 - Seamless textures can create flowing rivers

Seamless textures are a great way to cover large areas without the feeling of a repetition grid, keeping everything looking uniform overall. This type of texture also works great for flowing rivers or fluid movement. That said, make shape direction your main ally for flow movements: think of a "starting quad" and an "ending quad" between which the water will flow along the loop.
Seamless textures as flowing river
Mapping the 3D quads onto a square texture also gives you the ability to shape the "stress" of the water flow by enlarging or stretching their area in UV space. If you stretch a quad in UV space you get a quicker, more stressed water flow; on the other hand, if you enlarge its length, you get a calmer and slower flow along the water path.
​
This technique can be applied to a wide range of standard water flows like broken pipes or waterfalls.
Seamless textures as flowing river
To achieve this, we created a square texture that is seamless along the Y/V axis so the river follows the pattern vertically, and the animation moves the texture tile offset from 0 to 1 over a given time. This way your "simple trick" animation is always flowing seamlessly.
​
Although in the previous texture tile preview you can notice the "square" repetition due to small details in the texture, this is not noticeable on the river mesh, as its quads only match the straight vertical X/U boundaries, while the horizontal edges keep an unfolded freedom. This means the texture does not necessarily match the whole square, as its texel density is uniform along the entire mesh. Of course, this was our desired approach; you can always match the full square for different visually appealing results.
Flowing river in vrchat Avatar's garden
#02 - Seamless textures can create flowing oceans

The river came first, as it is a simple, logically planned UV animation, but we found that the ocean couldn't be animated the same way, since its flow doesn't behave the same way. The main issue was to imitate and visually resemble ocean waves in a calm way, without creating wave meshes. The look we aimed for was inspired by the following Gauguin painting:
Gauguin ocean as vrchat world: avatars garden
From the picture above you can already see how the ocean's shape and look were made: each "wave" was created as a river that runs along the shore out to the limits of the map, creating a water flow that resembles the sea.
​
At this point you can already guess that it cannot be animated the same way as the river; in this case the animated axis is the X/U axis, moving the foam towards the shore.
Avatars garden ocean in vrchat
This was achieved by making a seamless connection between all the faces; this time each quad occupies the whole UV region, creating a seamless flow deformed by the mesh itself. The stress points are now created by using more (and smaller) polygons instead of smaller UVs.
​
And that's it for the project breakdown of this particular environment effect and animation. The next two points focus on creating this effect step by step.
#P1 - OK, I made a river. How do I create proper UVs?

Once you have the initial river mesh, you will probably have a UV mess: because of how Maya behaves, UVs get deformed as Maya performs the commanded operations. My recommendation is to delete all the UVs and select the edge path the river will follow; you should get something like this by using a Contour Stretch:
UVs for flowing river
As you can see, with the Maya history deleted and the transforms frozen, an open loop will be displayed like this when performing a Contour Stretch. You could say it still doesn't look right (because the whole mesh has been stretched to the 0-1 UV coordinates), but it already has the water flow direction. From this point, the best next step is to scale the whole UV shell along the V axis until you find the desired texture stretching.
UVs for flowing river
Remember to rotate the shell and align the last quad of the river as the highest point in the UV coordinates; this way, animating the value along the positive axis will display properly. Otherwise the river would be climbing back up the waterfall.
With the UV length expanded along the V axis you now have a proper seamless water flow in a river, good job! Now let's take a look at the next point.
#P2 - Now I want it to flow: Animation.

One of the many benefits of Unity is its simplicity when it comes to animation. Through its animation system, Unity can animate almost any public property or value displayed in the Inspector of a GameObject in the scene. All you need is to create an Animator Controller in one of your Assets folders, create a default state and assign an Animation clip to it.
Animating a flowing river in Unity
Once you have done this, you can start creating and editing your water animation in the Animation window (Ctrl+6).
For this river I set two keyframes, one at the starting point at frame zero and the other at frame 120 (2 seconds at 60 fps). The values are zero and one respectively.
Animating a flowing river in Unity
Because you are animating a texture tile offset, when it reaches 1 it is visually back at the starting point. It is like a sphere rotating 360 degrees: once it completes a full turn, being at 360 on the Y axis is the same as being back at the starting point of 0.
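The same wrap-around logic can be seen in a small script that scrolls the offset directly, as an alternative to an Animation clip (a sketch; "_MainTex" is the default texture property of Unity's Standard shader, so adjust it to whatever shader you use):

    using UnityEngine;

    // Sketch: scroll the main texture along V so the river loops every `duration` seconds.
    public class RiverScrollExample : MonoBehaviour
    {
        public float duration = 2f; // seconds for a full 0 -> 1 offset loop

        Renderer rend;

        void Start()
        {
            rend = GetComponent<Renderer>();
        }

        void Update()
        {
            // Mathf.Repeat wraps the value back to 0 when it reaches 1, which looks
            // identical because the texture is seamless along V.
            float v = Mathf.Repeat(Time.time / duration, 1f);
            rend.material.SetTextureOffset("_MainTex", new Vector2(0f, v));
        }
    }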
Animating a flowing river in Unity
The interpolation between the start and end points can be done however you want the water to feel, but I recommend linear interpolation, as it has no moments of acceleration or deceleration and the "speed" is always handled by the UV space of the texture. This approach makes it look natural.
Animating a flowing river in Unity
In the Unity Editor, the values affected by the animation clip are highlighted in red. This is pretty useful, as it indicates that the value will always be overridden by the incoming animation clip on play.
With these Unity basics you will have a beautiful (maybe simple, but effective) water flow. From this point everything is an add-on, and there are infinite ways to keep experimenting and improving this workflow; it will always depend on the artistic and visual approach you are going for.
#04 Conclusion

With all that said, I will pack up my tools and write my last paragraph as a farewell. This was my very first time making a flowing river, approaching sea waves and living, moving water, but I had already been experimenting with texture offsets for a wide range of other applications. It is more important to know the useful basics and tech tricks than very particular cases of technology and graphics applications. Experimentation is key in this world!

That's all you need to start experimenting with simple water flows. There is no real need to use particle systems or complex shaders; with the correct texture and clever use of UVs you can achieve amazing results that also have a lower impact on performance.
​
You are more than welcome to drop any question or your try-out and results!
Join us:
https://discord.gg/UDf9cPy

Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving everyday to be a better hooman.
Twitter

Creating Worlds from Paintings

7/14/2020

The Mission

Here at Polygonal Mind we always try to reach new heights when presenting our visuals. Our latest environment, which we created in VRChat, is a testament to that.
​
In this article I want to talk about how we developed the idea to create this world and where it comes from.
Resources

  • An Artist you like
  • The software of your preference
Creating worlds from paintings
Bringing 2D into 3D

Momus Park was the first time we tried this, with surprising results. For those who don't know, the majority of the textures used are based on The Starry Night by Van Gogh, trying to imitate the spirals and brush strokes of the sky and the city on the models.
Momus park in decentraland
Momus Park
In this project, we took those ideas and methods a step further, trying to recreate not only the textures but also the look inside the painting, creating a world in which, wherever you went or looked, you experienced the sensation of being inside a painting, uniting 2D and 3D.
​
Actually, this is not a new concept; however, most of the 3D works that try this method are static images or videos. Impressive technique and design without a doubt, but it's a shame you can't walk around the worlds they created.
Where to start

Once we have defined what we want to do and the feeling we want to convey, it's time to gather references and styles.

In this case, we were looking for artists who had painted open-air spaces and gardens, since this world is called Avatar Garden. That means the player has to have places to walk through, places to select the different avatars, and relaxing zones where players can meet other people.
​
The other condition we set ourselves was that it had to be a classical painting: since we were connecting 3D and 2D, we thought it would be interesting to join a classic art medium with a new one. Putting these two things together immediately brought the impressionist movement to mind, with artists such as Claude Monet or Édouard Manet.
Un bar aux Folies Bergère - Édouard Manet
Un bar aux Folies Bergère - Édouard Manet
Water Lilies - Monet Series
Water Lilies Monet Series
This style has a lot of characteristics that make it ideal for this: the textures have a lot of personality, with a strong presence of brush strokes and spots of colour, making it really easy to create tileable textures. Finally, after reviewing a few artists, we settled on Gauguin and his Tahitian landscapes.
Gauguin painting
Gauguin painting
Take notes

Once we have selected an artist, it's important to take notes. Each artist has a particular style that took years to develop, and if we want to recreate their paintings or artworks, we must try to imitate those details. In this case we can pinpoint a few things from the get-go that will help us recreate it:
​
  • the colours are saturated and very strong (reds, greens and yellows predominate in the scene)
  • the brush strokes are visible, characteristic of the impressionist movement
  • the shapes are representative, not a literal depiction of the object (as the impressionists said, recreate the impression the object left on you)
  • the cool colours are used very sparingly, and only in zones where their use is unavoidable, for example the sky or the darker parts of the jungle
​
Once these main points are clear, it's time to prepare the props and assets. To get started, we created some of the most basic props to fill a simple scene. That way we could define more details and polish the models even further.
3D trees based on paintings
Original models based on the painting on the right
Gauguin painting
This also helped us develop a pipeline that allowed us to work more efficiently, which more or less consists of this:

  • create a list of the models and textures you will need.
  • try to use tileable textures as the base for the models wherever possible; for example, in this world we created a single texture with colour variants of green, red and yellow.
  • create the models and try to reuse the textures. If a new texture is required, ask yourself whether it can be repeated: if so, make it tileable so it can be reused; if not, paint the model in Substance to recreate the strokes more faithfully.
  • use transparency to add more detail to the models and recreate the texture of the bushes and trees (more on that in the next point).

Finally, in the spirit of Gauguin, we decided to change the world to an island instead of a garden.
Improvise, Adapt and Overcome

Creating this pipeline gave us an idea of what we would need to make; however, it's not as easy as that. As always happens in this kind of project, we ran into a series of problems, especially when adapting some of the assets to the painting's style.

There were times when we had to reject models because, even though the final result might look impressive, the time spent on them, the resources they took or the number of tris made the process not worth it.
​
For example, the tree below (reference on the left): to recreate it, we gave the leaves a base with a green texture and then recreated the yellow strokes with transparent planes. On the right you can see the tree in progress. Sadly, we had to drop it for the reasons stated above.
Gauguin painting
original tree
3D tree based on painting
tree in 3D
Another example comes from the vegetation, in particular the bushes. Since the forms are not very detailed, it took a lot of tries to achieve a satisfying shape; it was difficult because they often ended up resembling deformed spheres. Since this was a necessary model, we reworked it until we achieved the desired result.
3D bushes based on painting
Final shape of the bushes
The most important lesson here is that not everything comes out the way you imagine it, so when you get to that point it's important to stop and think about it. Is it necessary? Is it taking a lot of your time? How can I change it? This is normal, so it's important to improvise, adapt and overcome.
Improvise. Adapt. Overcome.
If you keep going you can finally obtain the desired results, as you can see:
100 Avatars world in vrchat
100 Avatars world in vrchat
100 Avatars world in vrchat
100 Avatars world in vrchat
Conclusion

While working on this project we discovered that it was a good way to test our limits and see what we can do in an environment project. We tried to recreate the paradise Gauguin saw when he first travelled to Tahiti. All in all, I hope to see you there.
Vacation cat

Laura Usón
3D ARTIST
Passionate about videogames, movies and creatures. artist by day and superhero at night.