Adding the bones
Of course, the eyes won't move by themselves; they need a bone that will make them bounce. I'm sure you followed our rigging tutorial to easily rig your character with Mixamo and fix any nasty problems. If not, be sure to check it out here:
Fix and reset your Mixamo avatar rig
Using the Create Joints tool, add your bones wherever you want. Make the bone chains as long as you need so the motion looks as smooth as possible.
Want to give your character even more personality? Use blend shape visemes to add facial expressions while talking. You can easily follow our guide here:
Create and upload a VRChat Avatar with blend shapes visemes
Now, export your character, making sure the skin weights are correct and the skinning checkbox is ticked.
Time to bounce
Next stop, Unity.
Be sure to have the Dynamic Bones asset installed in your project because it's what we need to be able to move the new bones.
Check if everything is correct and the skin weight is working properly.
Drag and drop the DynamicBones.cs script onto your character mesh, or add it as a new component in the Inspector tab. Time for some tweaks.
By default, Dynamic Bones gives pretty good results for bones that interact with meshes and are affected by gravity, but my case is a little bit special, and we will have to adjust it correctly.
First of all, we need to assign which bones we want to be dynamic. For that, we will select the "Root Bone", that is, the parent bone of all our dynamic bones.
Test, test, test. Move your character. Rotate it. Make sure it does what you want. You can get a lot of different effects by just adjusting a couple of parameters.
This is definitely not what we want. While the eyes move correctly on the Y and X axes, we don't want them to move on the Z axis.
Eyes in place, but we still need to tweak how the eyes behave when they move. These are all the options, shown in the image below.
Now that you know what each option does, it's time to test. Tweak some settings and try it yourself.
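The kind of bounce these settings produce can be pictured as a damped spring pulling the bone back toward its rest pose. This is just a conceptual Python sketch, not the actual Dynamic Bones code; the `elasticity` and `damping` names are loosely modeled after its parameters:

```python
# Conceptual sketch of a dynamic bone tip as a damped spring.
# Higher "elasticity" snaps it back to the rest pose faster;
# higher "damping" kills the wobble sooner.

def simulate_bounce(rest=0.0, start=1.0, elasticity=0.2, damping=0.1,
                    steps=200):
    """Return the bone tip positions over time (1D for simplicity)."""
    pos, vel = start, 0.0
    history = []
    for _ in range(steps):
        vel += (rest - pos) * elasticity   # spring pull toward rest pose
        vel *= (1.0 - damping)             # damping bleeds off velocity
        pos += vel
        history.append(pos)
    return history

positions = simulate_bounce()
# The tip overshoots past the rest pose, oscillates, and settles.
```

Playing with the two numbers mirrors what you do in the Inspector: raise damping for stiff, controlled motion; lower it for a loose, jiggly feel.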
If you don't know how to do it, check out our guide on how to upload your avatars to VRChat.
Dynamic bones are a simple yet super effective way to give life to your characters. With just a little bit of tweaking you can get really good results, making your characters more dynamic and lifelike.
Moving clothes, hair, tails and eyes are just the beginning; your imagination is the limit here. Be creative!
Getting the rig
Since we want humanoid avatars, the best way to get a fast rig is using Mixamo.
Mixamo is an automatic rigging website that lets you rig humanoid characters quickly and for free.
I won't cover how to use Mixamo, since we already have that covered in this post here:
But I will explain how to use all the tools I used when rigging almost every one of the 200+ different avatars we have made for the 100 Avatars project.
So tag along, because the world of rigging is one where patience is KEY.
You should now have the avatar ready in your Maya project.
There are a few places where you have to take a closer look, since these are the most problematic areas: the shoulders, armpits and hands. Depending on the character you might have to check other places too, especially if it is a complex character.
Ask yourself: are all the bones where they are supposed to be?
In this case... no. Using the X-Ray Joints option you can easily see where each bone is inside the body.
In this case, the shoulders aren't where they should be, so, how can we move them?
With a really useful tool called Move Skinned Joints.
Go to the Rigging menu set, and then to Skin. Almost at the bottom, you should find the tool. Click on the square on its right and then on any joint. Now you can move joints freely without any problem!
Use it to move the shoulders where they should be.
Now it's time to skin!
Value sets the influence of the brush: the higher the value, the stronger the influence, from a minimum of 0 to a maximum of 1.
The Flood button applies the selected value to every vertex in the selected region.
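To make the idea concrete, here is a small Python sketch of what a flood does to per-vertex weights. It assumes a simplified layout (one weight per joint per vertex, weights summing to 1); Maya's actual data structures are more involved:

```python
# Sketch of skin-weight flooding on a simplified weight layout:
# each vertex stores a weight per joint, and every vertex's
# weights must always sum to 1.

def flood(weights, joint, value):
    """Set `joint`'s influence to `value` on every selected vertex,
    then renormalize the other joints so the total stays at 1."""
    flooded = []
    for vertex in weights:
        others = {j: w for j, w in vertex.items() if j != joint}
        remaining = 1.0 - value
        total = sum(others.values()) or 1.0
        new_vertex = {j: w / total * remaining for j, w in others.items()}
        new_vertex[joint] = value
        flooded.append(new_vertex)
    return flooded

verts = [{"shoulder": 0.5, "arm": 0.5}, {"shoulder": 0.2, "arm": 0.8}]
result = flood(verts, "shoulder", 1.0)  # like flooding with Value = 1
```

The renormalization step is why painting a value of 1 everywhere is risky: it zeroes out every other joint's influence on those vertices, which is exactly the hard deformation break you usually want to avoid.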
With this explanation of the tools, you should have a good idea of how to skin a character inside Maya.
Now, skinning is not an easy thing, at least not to do it right. It requires a lot of patience. A few pieces of advice I can give: try to use a value of 1 as little as you can; use the smooth option, since it really is invaluable; don't be scared of rotating bones; and aim to get the cleanest deformation break in your mesh.
Remember to check those zones I wrote about earlier and have fun! Skinning is an important process and takes time. The more you practice the better you will become!
If you want to see what's the next step, read my post about how to make visemes for your avatar and configure it inside Unity!
First Steps: What are we looking for?
In order to begin the process, we need to define what we want to do. From a medieval wagon to a sci-fi antenna, the item we want will define all the work we must do. Building an item from an apocalyptic world is not the same as building an object from the far future. The whole workflow, from texturing to modelling, will change depending on what we are looking for.
In this case, our choice is going to be a Cliff Scaffold. For this purpose, we are going to use a concept. This is not our goal; it's just a support image to define the pieces that we need.
At first sight we can break down groups of pieces and we can add some more:
After the breakdown, we need to think about how this will be built. We are going to make a few pieces from each group to achieve some variation. But first, we need to talk about these damn materials.
Materials or "How trim textures make us happy"
It's time to talk about materials. With all the above in mind, we need to define how our textures will work and look. How many textures do we need? A texture for every channel in every material? NO. Instead, we will "combine" all the materials into one by making clever use of the UV space.
We are going to use a trim texture instead of several unique textures. If you haven't worked with this kind of texture before, you can learn how to create one by reading the Trim Textures: Making Of article.
With a quick preview of what we want and how to do it, you should be able to anticipate all your visual needs and your material count.
The core idea is to use this trim texture to texture almost everything by tiling the UVs along one of the two axes (U or V). You can see that our texture consists of three types of wooden planks, for all the wooden pieces modelled after the reference; a metal section for details and to add some more style to our props (for example, for ends and decorations); and a rock section for the base and maybe some big pieces. All three of these "materials" are going to be tiled along the U axis of the UVs. With only these three strips we can texture our whole asset. To achieve this, we are going to keep this texture in mind during the modelling phase.
You should also keep texture usage and model visibility in mind. When texels come into play, the textures need to match how much texture the mesh actually needs; keeping a healthy texel density ratio is a good way to make your models look nice wherever they are placed in the scene. Here you can see that we have dedicated almost 2/3 of the texture to wood, while the rock and ironwork remain in a smaller space.
Last but not least, this small guide is focused on a material that only uses the color/albedo channel; we have baked the AO, bumps and some light details, but it can also work well with a full PBR material (Albedo, Normal, Metal/Rough, AO, Height, Emission). You will just have to do the texture work for every channel, matching their space and watching out not to overlap information between channels. A good workaround for bigger environments is packing the trim textures by material, especially the ones that use specific information extensively, like the emission or metal channels.
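As a rough sanity check, the texture-space budget described above can be worked out numerically. A small Python sketch with illustrative numbers (the 1024px sheet size and the 1m plank width are assumptions for the example, not values from the project):

```python
# Rough texel-budget check for a trim sheet. Illustrative numbers: a
# 1024px texture with two thirds of its height given to wood and the
# rest split between the metal and rock strips.

TEXTURE_SIZE = 1024  # pixels per side (assumed, not the project's value)

budget = {"wood": 2 / 3, "metal": 1 / 6, "rock": 1 / 6}

# Pixel rows each strip gets along the V axis of the sheet.
rows = {name: round(TEXTURE_SIZE * share) for name, share in budget.items()}

# Texel density if one strip maps to a 1m-wide plank tiling along U:
density = TEXTURE_SIZE / 1.0  # pixels per metre
```

Running the numbers like this before painting the sheet helps you catch strips that would end up too small to hold readable detail.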
Modelling our pieces.
With our texture already done, we have to create all the pieces of our asset with our modelling toolkit. The core idea of modular environment design is to create as few models as possible while giving each one a unique look.
The objective is to get a group of different pieces like pillars, planks and decorations. Three variations of each type of item should be enough to avoid repetition. Don't create 3 identical pillars; instead, make shape variations or even small compositions to match your reference moodboard. After modelling our assets, our scene should look like this.
We have built different pieces: some short and long pillars, some planks, some decorations. In short, all the things we need to build different structures. Next, we should work on the UVs for texturing. If you don't know how trim textures work, you should read the article Trim Textures: A New Hope.
With all the modelling done, it's time to import everything into our game engine. Here we use Unity, but the process is similar in Unreal, CryEngine and other engines.
Building Props from pieces on Unity 3D.
The final step of our work has come. By importing our assets to Unity, we begin to build our props.
In Unity you can use prefabs for your environment compositions or just keep the pieces tied together in the scene by parenting; the choice is yours!
Prefab design will keep your assets reusable across scenes without the need to rebuild them from scratch, but they will likely be harder to modify once they are built. Prefabs store the transform and all component data of each GameObject under the prefab parent, which makes them more sensitive to mesh edits after building.
On the other hand, keeping your assets parented but not prefab-ed will make them more tedious to update (especially across different scenes) but easier to modify at the mesh level.
By prefab-ing, we can build dozens of props that in the end reference the same meshes and share the same material batching. This heavily improves runtime stability and scene load. Just parent some pieces to the transform point you want and drag them from the Hierarchy to your Project/Prefabs folder!
With our pieces already in Unity, we can invest time in building some props. By investing more time we could create even more assets to fill our scene, but in this case we only want to show the idea of building a modular asset.
By spending some time on composition we can achieve our goal of setting up a scene from our modular asset. In this example we have built a hut, a platform with a ladder and some boxes that all use the same material and meshes. All wooden, all optimized.
In conclusion, with this workflow you can build a wide range of props that in fact use only a few models.
In this case we have created a simple asset with some wood, metal and rock, but by applying this workflow to other styles or goals, we can quickly get complex models to use in our projects. The mastery lies in striking a balance between generic, modular props that create the general look and composition, and a layer of specific content on top of them: for example, adding a boat, a fishing net, or just creating something that doesn't match the trim-material features but fits in our environment.
Mastering this workflow will allow you to create scenarios quickly, without much effort and with good optimization.
Isn't it great when you talk with somebody online and you see their mouth moving while they talk? It really adds to the experience, especially in Virtual Reality.
That's what this is about: creating different shapes so you can see yourself talking when you look at a mirror.
It's the little touches that take something from good to better.
Let's say you already have your model done; it's also rigged and skinned, so it's ready to go. But you want to make some blend shapes because they look neat and funny in-game.
Well, let's make them!
First, we need to know how many blend shapes we need to make. VRChat uses 17 different blend shapes. These are:
It's important to know that the shapes we are going to make need very specific names. For example, aa is called vrc.v_aa, ch is called vrc.v_ch, and so on.
The only exceptions to this rule are the first 4 on the list. Their names will be vrc.blink_left, vrc.blink_right, etc.
As you can see in the image, there is no "." in any of the names, and that's because Maya doesn't let you use dots in names. We will roll with it for the moment.
Duplicate your character and move it to one side. Hide what is not necessary and show what is.
Use a reference image to know how to shape the mouth for each viseme.
This gives you a general idea of how I made the different shapes of the mouth depending on the viseme.
You can see that there is not any vrc.blink_right or vrc.lowerlid_right, but I will talk about that later.
Another thing to keep in mind: even though vrc.v_sil doesn't change the shape at all, you must still change something. Later, when exporting from Blender, if Blender detects that "sil" is identical to the base shape, it will remove "sil" from the blend shapes. Move one vertex a tiny bit, one that no one will see, at the back of the mouth, for example.
Now that we have every shape done, we will use the Shape Editor.
Open the Shape Editor in the Sculpting tab in Maya, or by going to Deform > Blend Shape.
Now, select one shape that you created and then select the original model. Go to the Shape Editor and click on "Create Blend Shape". Repeat this for all 17 shapes.
Earlier I said that I didn't have any blink_right or lowerlid_right, and that's because you usually don't need them. If the character is symmetric, you can duplicate your blink_left, select the new target and, in the Shape Editor, go to Shapes > Flip Target.
This will create a mirror effect and make the right eye blink. You should change the name once it's done.
Export and Import
We have every shape ready, so now we will export the whole package. Select all the shapes, meshes and bones and go to Export.
Be sure to tick the Animation checkbox, and make sure Blend Shapes is activated too, because if it's not, the shapes won't export correctly.
Write the name you want and export it.
Now we will open Blender, where we will change the names of the shapes to the correct one.
Open a new scene and delete the default objects, camera and light included.
Then, import the file we made earlier.
Navigate through the menus to find the Shape Keys sub-menu.
Here you can change the names of all the shapes. Delete the first "_" and replace it with a "."
The last thing you have to do is rearrange all the shapes so they are in order. The order is the same as the list I wrote at the beginning.
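The rename-and-reorder step boils down to simple string handling. Here is a hedged Python sketch of the logic; in Blender you would do this by hand in the Shape Keys panel (or via its Python API), and the ORDER list below is a short illustrative subset, not the full 17-shape list:

```python
# Sketch of the rename step: Maya exports names like "vrc_v_aa",
# VRChat wants "vrc.v_aa", so we swap only the FIRST underscore for a
# dot, then sort the keys into the expected order.

ORDER = ["vrc.blink_left", "vrc.blink_right", "vrc.v_sil", "vrc.v_aa"]

def fix_names(exported):
    renamed = [name.replace("_", ".", 1) for name in exported]
    return sorted(renamed, key=ORDER.index)

keys = ["vrc_v_aa", "vrc_blink_right", "vrc_v_sil", "vrc_blink_left"]
print(fix_names(keys))
# → ['vrc.blink_left', 'vrc.blink_right', 'vrc.v_sil', 'vrc.v_aa']
```

Note the `1` in `replace`: only the first underscore becomes a dot, so names like blink_left keep their inner underscore.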
Once that's done, export as fbx.
You should already have the latest stable version of Unity set up. If you don't, check out this guide made by my friend Alejandro Peño, where he explains how to set it up.
With the character imported, we will add a new component called VRC_Avatar Descriptor.
We will drag the mesh into the "Face Mesh" slot. All the visemes should appear below it.
Now just click on each slot and select the corresponding viseme.
Once it's finished, you can upload the character like you usually do. Again, if you don't know how, you can check this guide:
Blend shapes visemes are a great way to give life to your avatars in VRChat.
I would 100% recommend using them in your future avatars.
Depending on the model it takes around 30 min to an hour to create all the shapes needed, and they look great.
It's a lot of fun making these, so give them a try!
Back in May, the Oculus Quest was released: a standalone device that lets you use VR without a PC or wires. Until then you needed a high-end computer to run VR games and experiences, so developers and creators didn't have to reduce as much when creating content or avatars for VRChat.
During 2018 Q4, Polygonal Mind's team took on the challenge of making 100 characters in 100 days; you can read more about it here.
My friend Alejandro Peño and I joined the studio as interns and were tasked with a project where we had to prepare, optimise and upload over 100 characters to VRChat for the Oculus Quest.
It was a challenging workload, but through consistent work we were able to transform these characters into optimised avatars for VRChat.
Some characters proved to be more difficult than others, so I will make sure to explain the problems I faced when fixing non-optimal characters and how I managed to solve them. Even though we used Maya in the studio, all of this knowledge is applicable to any 3D modelling software.
So I'll recap a series of problems I faced when setting them up for VRChat.
Let's start optimizing
The VRChat team provided the following rules to follow when it comes to Quest avatars:
Step 1 - Reducing Textures
This might be the easiest of all steps.
All the characters used 2048x2048 textures, so we had to reduce them to the desired size.
In Photoshop, we created a new project at 1024x1024 pixel resolution and then imported all the textures. Once they were all in and adjusted to the canvas, we exported each layer as an independent PNG.
Since they already had the appropriate names, we had 100 textures ready to go.
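For a sense of what this step buys, here is a quick back-of-the-envelope calculation in Python (assuming uncompressed RGBA at 4 bytes per pixel; real GPU texture formats compress further, but the ratio holds):

```python
# Quick check of what halving the texture resolution saves, assuming
# uncompressed RGBA (4 bytes per pixel).

def texture_bytes(size, channels=4):
    return size * size * channels

before = texture_bytes(2048)    # 16 MiB per texture
after = texture_bytes(1024)     # 4 MiB per texture
saved = (before - after) * 100  # across all 100 avatar textures
print(saved // 2**20, "MiB saved")
# → 1200 MiB saved
```

Quartering each texture's footprint is exactly why this is "the easiest of all steps" with the biggest immediate payoff on a mobile chipset.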
Step 2 - Polycount reduction
Most of the models had the right poly count, but some others didn't.
Franky's head is a clear example: it had 12,572 triangles.
Here are some rules we followed when it comes to reducing polygons:
... wait... What if the map seams are non-optimal?
What can you do when there are map seams literally everywhere? That's what happened with the 50th character, Samuela.
We duplicated the model and started deleting edges without thinking too much about the seams or the texture, since we were going to make a new UV layout once the model was reduced.
Once in ZBrush, with every mesh and the old texture imported, we took the old Samuela model, subdivided it and converted the texture into polypaint.
Beware: ZBrush applies color to the model's vertices, so you will need to subdivide your model until it reaches around a million points to keep as much of the texture's detail as possible.
Then it's time to project the high-poly model's polypaint onto the new one: subdivide the new model until its poly count matches the old one, and simply project the old Samuela onto the new one. Repeat this for every subdivision level until you get enough texture detail on the new model.
Note that projection might not be precise and you might have to improve the texture in Photoshop.
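Since each subdivision level roughly quadruples the point count, you can estimate in advance how many levels you'll need before polypaint can hold the detail. A small Python sketch (the 6,000-point base count is illustrative, not Samuela's actual count):

```python
# Estimate how many subdivision levels a mesh needs before it holds
# ~1M points for polypaint. Each level multiplies the point count by
# roughly 4 (quads split into 4).

def levels_needed(base_points, target=1_000_000):
    levels, points = 0, base_points
    while points < target:
        points *= 4
        levels += 1
    return levels, points

print(levels_needed(6000))
# → (4, 1536000): four levels take a 6k-point mesh past a million
```

Knowing the level count up front helps you judge whether your machine can handle the projection before you commit to it.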
Adding mouth and eyes into an existing model for Visemes
This part is completely optional. But it really gives your characters life when they are in game.
For a quick turnaround what we did was:
For the rig, we used Mixamo. Mixamo is a web page that rigs and skins a character automatically, given some inputs like the position of the wrists, elbows, knees, chin and groin. For the most part, Mixamo did a pretty good job, especially for the humanoid characters. But for the not-so-human ones, you had to edit the skinning to get a great result. How to fix those is a topic for a different day.
We'll talk about this deeper in a future post.
Like many of you reading this, we first uploaded the characters to VRChat thinking only of PC users, so all the materials were left with Unity's default shader. Quest avatars, however, require a mobile diffuse shader, so we had to change them.
If you have followed a good naming convention, this will only take a minute. For example, we add a mtl_ prefix to all our materials. In Unity, type the material prefix in the search field to quickly select and change them all at once.
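The prefix trick is easy to picture: selecting the materials to convert becomes a one-line filter. A tiny Python sketch with made-up material names:

```python
# Why the "mtl_" naming convention pays off: picking every material to
# retarget is a one-line filter (names below are illustrative).

materials = ["mtl_body", "mtl_hair", "tex_body", "mtl_eyes", "FBX_scene"]

to_convert = [m for m in materials if m.startswith("mtl_")]
print(to_convert)
# → ['mtl_body', 'mtl_hair', 'mtl_eyes']
```

Unity's Project search does the same filtering for you, which is what makes the batch shader swap a one-minute job.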
100 characters are a lot. But like I said earlier, with some structure and consistent work, we made this happen in 3 weeks. At Polygonal Mind, we use Notion.so to keep all our projects and tasks organised.
That being said, there were a bunch of characters that needed little to no optimisation, but others needed almost a full rework. This stuff takes time, especially when you count characters by the hundreds.
I hope this guide helped you optimise your avatars for Quest users. It was a challenging project for us, but the work pays off very quickly once you see players wearing them in game.
So sit back, put on some music, and start working. It's been really fun making these, and the payoff of seeing avatars you've worked on being used by other people is a great feeling.
Pedro Solans was an intern and now junior animator working at Polygonal Mind's in-house team.
Daniel García (aka Toxsam) is the founder and creative director at Polygonal Mind.
First of all, what's a LOD?
LOD (Level of Detail) is a game optimization method that decreases the complexity of a 3D mesh (or, more recently, even shaders, textures, etc.) as it gets further away from the player; it's usually used in conjunction with other optimization techniques like culling. The most common implementation uses a secondary, lower-resolution mesh that replaces the original at a certain distance to avoid rendering unnecessary detail. The initial mesh can be generated automatically, but you will have to fix some things manually!
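The core mechanism is easy to sketch: pick the lowest-detail mesh whose distance threshold the camera has passed. A minimal Python illustration (mesh names and thresholds are made up for the example):

```python
# Minimal sketch of distance-based LOD selection: walk the thresholds
# in order and keep the last mesh whose switch distance has been passed.

LODS = [(0.0, "tree_LOD0"), (15.0, "tree_LOD1"), (40.0, "tree_LOD2")]

def pick_lod(distance):
    chosen = LODS[0][1]
    for threshold, mesh in LODS:
        if distance >= threshold:
            chosen = mesh
    return chosen

print(pick_lod(5.0), pick_lod(20.0), pick_lod(100.0))
# → tree_LOD0 tree_LOD1 tree_LOD2
```

Engines refine this in various ways (hysteresis, screen-size metrics, crossfading), but the selection logic is essentially this lookup.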
Using MayaLT to create automatic LODs
Maya has a built-in tool that lets you create automatic LODs based on either camera distance or % of the total poly count. You can access it in Edit > Hierarchy > LOD (Level of Detail) > Generate LOD Meshes. Click the box right next to this last button to get access to the options instead of the defaults.
1. First, duplicate the mesh we are going to LOD and hide it. This is the equivalent of duplicating the background layer and working on a regular layer in Photoshop, just to make sure we keep the original in case something goes wrong.
2. Then use the tool to create as many LODs as you need.
3. Extract the meshes from the LOD group by unparenting them, so you can examine and fix any problems you find, individually.
4. Apply the original mesh's material to the LODded one, so you can see how close it looks to the original.
5. After that, follow the troubleshooting steps below to fix the problems you caught.
6. Finally, when you're happy with the result, rename everything to your chosen naming convention.
Using LODs in Unity 2019
Unity has a built-in LOD system. First, you need the correct hierarchy for it to work. Import your mesh and its LODs into the scene, create a new empty GameObject, and put the original mesh and its LODs inside it. Then select the parent, go to the Inspector panel, click Add Component and search for "LOD Group".
You'll then see 4 slots for different LODs. You can add or delete as many as you want by right-clicking on them and selecting "Insert Before" or "Delete", respectively.
You can now assign the meshes you prepared earlier to each LOD: click on the desired LOD, then click the big square "Add" button and pick the mesh.
Also, by dragging the transitions between LODs left or right, you can set up where the different LODs switch based on distance. By dragging the camera icon you can preview the system working as it would in real time, as the player gets closer to or further from the mesh.
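Under the hood, Unity's LOD Group switches on the object's height relative to the screen rather than raw distance. A rough Python approximation of that ratio for a perspective camera (this is a standard pinhole-camera estimate, not Unity's exact internal formula; the values are illustrative):

```python
# Approximate an object's screen-relative height for a perspective
# camera: object height divided by the height of the view frustum
# at that distance.
import math

def screen_relative_height(object_height, distance, fov_deg=60.0):
    view_height = 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
    return object_height / view_height

# A 2m prop seen from 10m with a 60-degree vertical FOV fills about
# 17% of the screen height, so with an LOD1 threshold of 0.15 it
# would still show LOD0.
ratio = screen_relative_height(2.0, 10.0)
```

This is why the slider percentages feel different for big and small props: a large rock stays at LOD0 much further out than a small plank with the same thresholds.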
As with any tool, if you just keep the automatically generated results, the outcome will be a lot worse than if you tweak it a bit to fit your needs (for example, just drag-and-dropping Smart Materials versus actually understanding material layering and using Smart Materials to accelerate your texturing process in Substance Painter or Quixel).
The beauty of this is the combination of the automatic process and the human input, generating a faster mesh than doing it all manually, but getting a better result due to the fixes done by the user.
Alejandro Bielsa is a junior 3D artist working at Polygonal Mind's in-house team.