Premise

Back in May, the Oculus Quest was released: a standalone headset that lets you use VR without a PC or wires. Until then you needed a high-end computer to run VR games and experiences, so developers and creators didn't have to optimise as much when creating content or avatars for VRChat.
Before starting

During Q4 2018, Polygonal Mind's team took on the challenge of making 100 characters in 100 days; you can check more about it here. My friend Alejandro Peño and I joined the studio as interns and were tasked with preparing, optimising and uploading over 100 characters to VRChat for the Oculus Quest. It was a challenging workload, but through consistent work we were able to transform these characters into optimised avatars for VRChat. Some characters proved more difficult than others, so I'll make sure to explain the problems I ran into when fixing non-optimal characters and how I managed to solve them. Even though we used Maya in the studio, this knowledge applies to any 3D modelling software. Below, I'll recap the problems I faced while setting the characters up for VRChat.

Let's start optimising

The VRChat team provided the following rules to follow for Quest avatars:
Step 1 - Reducing Textures

This is probably the easiest step of all. All the characters used 2048x2048 textures, so we had to reduce them to the desired size. In Photoshop, we created a new document at 1024x1024 pixels and imported all the textures. Once they were all in and fitted to the canvas, we exported each layer as an independent PNG. Since every layer already had the appropriate name, we had 100 textures ready to go.

Step 2 - Polycount reduction

Most of the models had the right poly count, but some didn't. Franky's head is a clear example: it had 12,572 triangles. Here are some rules we follow when reducing polygons:
...wait... What if the UV seams are not optimal?

What can you do when there are UV seams literally everywhere? That's what happened with the 50th character, Samuela. We duplicated the model and started deleting edges without worrying too much about the seams or the texture, since we were going to make a new UV layout once the model was reduced. Then, in ZBrush, with every mesh and the old texture imported, we took the old Samuela model, subdivided it and converted the texture into polypaint. Beware: ZBrush stores colour on the model's vertices, so you will need to subdivide your model until it reaches around a million points to keep as much of the texture detail as possible. Next, project the old model's polypaint onto the new one: subdivide the new mesh until its poly count matches the old one and simply project from the old Samuela to the new model. Repeat this for every subdivision level until you get enough texture detail on the new model. Note that the projection might not be precise and you may have to touch up the texture in Photoshop.

Adding a mouth and eyes to an existing model for visemes

This part is completely optional, but it really brings your characters to life in game. For a quick turnaround, what we did was:
Rig

For the rig, we used Mixamo. Mixamo is a web service that rigs and skins a character automatically given a few inputs, such as the position of the wrists, elbows, knees, chin and groin. For the most part, Mixamo did a pretty good job, especially on the humanoid characters. For the not-so-human ones, you had to edit the skinning to get a great result. How to fix those is a topic for a different day; we'll talk about it in more depth in a future post.

Materials

Like many of you reading this, we first uploaded the characters to VRChat thinking only of PC users, so all the materials were left with Unity's default shader. Quest avatars, however, require a mobile diffuse shader, so we had to change them. If you have followed a good naming convention, this only takes a minute. For example, we add a mtl_ prefix to all our materials, so in Unity you can type that prefix into the search bar to quickly select them all and change them at once. (There's an example editor script at the end of this post.)

Conclusion

100 characters is a lot. But like I said earlier, with some structure and consistent work, we made it happen after 3 weeks. At Polygonal Mind, we use Notion.so to keep all our projects and tasks organised.

With that being said, there were plenty of characters that needed little to no optimisation, and others that needed almost a full rework. This stuff takes time, especially when you count the characters by the hundreds. I hope this guide helps you optimise your avatars for Quest users. It was a challenging project for us, but the work pays off very quickly once you see players wearing them in game. So sit back, put on some music, and start working. It's been really fun making these, and the payoff of seeing avatars you've been working on being used by other people is a great feeling.

Post by: Pedro @05predo
Pedro Solans was an intern and is now a junior animator working at Polygonal Mind's in-house team.
Daniel @toxsam
Daniel García (aka Toxsam) is the founder and creative director at Polygonal Mind.
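As a bonus, here's a rough sketch of how that shader swap could be automated with a small Unity editor script. It assumes the mtl_ naming convention described in the Materials section and the built-in Mobile/Diffuse shader; treat it as an optional convenience rather than something we actually ran during the project.

```csharp
// Editor utility: switch every material whose name starts with "mtl_"
// to the Mobile/Diffuse shader required for Quest avatars.
// Place this file inside an "Editor" folder.
using UnityEditor;
using UnityEngine;

public static class QuestMaterialFixer
{
    [MenuItem("Tools/Convert mtl_ Materials To Mobile-Diffuse")]
    private static void ConvertMaterials()
    {
        Shader mobileDiffuse = Shader.Find("Mobile/Diffuse");
        if (mobileDiffuse == null)
        {
            Debug.LogError("Mobile/Diffuse shader not found.");
            return;
        }

        // Find every Material asset whose name contains the prefix.
        string[] guids = AssetDatabase.FindAssets("mtl_ t:Material");
        int converted = 0;
        foreach (string guid in guids)
        {
            string path = AssetDatabase.GUIDToAssetPath(guid);
            Material mat = AssetDatabase.LoadAssetAtPath<Material>(path);
            if (mat != null && mat.name.StartsWith("mtl_"))
            {
                mat.shader = mobileDiffuse;
                EditorUtility.SetDirty(mat);
                converted++;
            }
        }
        AssetDatabase.SaveAssets();
        Debug.Log("Converted " + converted + " materials to Mobile/Diffuse.");
    }
}
```

Run it from the Tools menu after importing your materials; because it filters by the prefix, materials that follow a different convention are left untouched.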
First of all, what's a LOD?

LOD (Level of Detail) is a game-optimization technique that decreases the complexity of a 3D mesh (or, more recently, even shaders, textures, etc.) as it gets further away from the player, and it's usually used in conjunction with other optimization techniques like culling. The most common implementation uses a secondary mesh that replaces the original with a lower-resolution version at a certain distance, avoiding unnecessary detail. This secondary mesh can be generated automatically, but you will still have to fix things manually!

Using MayaLT to create automatic LODs

Maya has a built-in tool that creates automatic LODs based on either camera distance or a percentage of the total poly count. You can access it in Edit > Hierarchy > LOD (Level of Detail) > Generate LOD Meshes. Click the box right next to this last entry to open the options instead of using the defaults.

General Workflow

1. First, duplicate the mesh you are going to LOD and hide it. This is the equivalent of duplicating the background layer and working on a regular layer in Photoshop: it makes sure you keep the original mesh and material in case something goes wrong.
2. Then use the tool to create as many LODs as you need.
3. Extract the meshes from the LOD group by unparenting them, so you can examine and fix any problems individually.
4. Apply the material of the original mesh to the LODded one, so you can see how close it looks to the original.
5. After that, follow the troubleshooting steps below to fix the problems you caught.
6. Finally, when you're happy with the result, rename everything according to your chosen naming convention and export.

Troubleshooting
Using LODs in Unity 2019

Unity has a built-in LOD system. First, you need the correct hierarchy for it to work: import your mesh and its LODs into the scene, create a new empty GameObject and put the original mesh and its LODs inside it. Then select the parent, go to the Inspector panel, click Add Component and search for "LOD Group".

You'll then see 4 slots for the different LODs. You can add and delete as many as you want by right-clicking on them and selecting "Insert Before" or "Delete", respectively. You can now assign one of the meshes you prepared earlier to each LOD: click on the desired LOD, then click the big square "Add" button. By dragging the transitions between LODs from left to right, you can set up where each LOD kicks in based on distance, and by dragging the camera icon you can preview how the system behaves in real time as the player gets closer to or further from the mesh. (If you prefer to set this up from code, there's an example script at the end of this post.)

The Conclusion

As with any tool, if you just keep the automatically generated results, the outcome is going to be a lot worse than if you tweak it a bit to fit your needs (for example, just drag-and-dropping Smart Materials versus actually understanding material layering and using Smart Materials to accelerate your texturing process in Substance Painter or Quixel). The beauty of this workflow is the combination of the automatic process and human input: you produce the mesh faster than doing it all manually, but get a better result thanks to the fixes done by the user.

Post by: Alex @Vontadeh
Alejandro Bielsa is a junior 3D artist working at Polygonal Mind's in-house team.
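If you'd rather build the LOD Group from code than by hand in the Inspector, here's a minimal sketch of the idea. The child names (Character_LOD0/1/2) and the screen-height thresholds are assumptions for illustration; Unity doesn't enforce any particular convention.

```csharp
using UnityEngine;

// Builds a LODGroup on the parent object from three child renderers.
// The child names and transition values are illustrative placeholders.
public class LodGroupBuilder : MonoBehaviour
{
    void Start()
    {
        LODGroup lodGroup = gameObject.AddComponent<LODGroup>();

        // Collect the renderers of each LOD child (assumed naming convention).
        Renderer[] lod0 = transform.Find("Character_LOD0").GetComponentsInChildren<Renderer>();
        Renderer[] lod1 = transform.Find("Character_LOD1").GetComponentsInChildren<Renderer>();
        Renderer[] lod2 = transform.Find("Character_LOD2").GetComponentsInChildren<Renderer>();

        // screenRelativeTransitionHeight: the LOD is shown while the object
        // covers at least this fraction of the screen height.
        LOD[] lods = new LOD[]
        {
            new LOD(0.60f, lod0), // full-detail mesh up close
            new LOD(0.30f, lod1), // medium detail
            new LOD(0.10f, lod2)  // lowest detail before the object is culled
        };

        lodGroup.SetLODs(lods);
        lodGroup.RecalculateBounds();
    }
}
```

The thresholds map to the same left-to-right transitions you'd otherwise drag in the LOD Group component, so you can still fine-tune them in the Inspector afterwards.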
The impact

- The avatars being featured by VRChat.
- Unity's social media account reposting our content during the challenge.
- An increase in followers on both Instagram and Twitter.

Idea background

It all started back in September 2018. I wanted to start investing time into developing new game art styles using Unity. I wasn't sure how to begin, but then I found out about the 100 days drawing challenge by Amanda Oleander on her Instagram. Her challenge and commitment inspired me so much that I made my own version of it: 100 characters, 1 character a day for 100 consecutive days. My condition was to publish an Instagram and Twitter post with a moving character every day, so I had to create a steady workflow that would hold up for the whole challenge.

The challenge process

To succeed at a challenge like this you should define a process and try to follow it every day. This will help you focus and will slowly reduce the time you have to dedicate to the challenge, since your brain will be learning and adapting to the tasks. If you don't know how to set up a process, that's okay; most of the time processes are the result of repetition. Just start, do it once, write down the steps you took to get to the final result, and the next day try to repeat them. Over time, the process takes shape, evolves and improves.

During the 100 days a lot of people asked us how we managed to make one per day, so here's a general overview of the character creation process. Most of the characters follow this scheme:

- Conception
- Fixing retopology with MayaLT
- UVs and Textures. To create the textures, we like to use Adobe Color in the studio; it easily helps us find color schemes that work for our characters.
- Rigging and Animation
- Unity scene set-up. Finally, we get to the part I wanted to invest more time in: Unity. The whole point of this madness was to force myself to use Unity as a quick tool to develop new visual concepts and ideas for future projects. I'm not going to dig into every detail of what I did in Unity, but there are a couple of tools that helped me save time and get great results during the 100 days.
- Final steps
- BONUS: extra tips to iterate faster on a challenge like this one
Uploading them to VRChat as avatars

Halfway through the challenge we came up with the idea of giving a second life to all the characters by turning them into avatars for the Metaverse. They were all already rigged with Mixamo, so we knew from experience that they could be used, at least in VRChat. Months later I decided to give the avatar idea a go with the help of two interns in the studio. Initially we just wanted to give them a simple rig and upload them to VRChat, but a few days into the work, the VRChat team reached out to us. They loved the variety of our characters and suggested we give them some extra love by adding visemes and optimising them for the new Oculus Quest release, so they could be used by even more players. So we improved them and created a VRChat world to gather them. I must admit that investing some more time into adding visemes made the characters way more interesting and fun to use!

Here is a screenshot of our Notion.so board in the middle of the project.

There is already a lot of documentation about how to upload avatars to VRChat, so we won't cover any of it in this post, but we'll be releasing another blog post with some tips for optimising avatars for Quest using MayaLT later on.

Next steps for this project

As you can see in our roadmap, our closest goal right now is to keep uploading all the characters to VRChat with visemes; we're really close to having them all up and ready to use. At the same time we'll keep improving the world too. After that, our next milestone is to tokenize these avatars using the blockchain, and our final goal is to release all the model files for free on our site as "open source avatars", so anyone can use them in any virtual world platform or project they're developing. While I was writing this post, the folks at LIV reached out to us about using the avatars on their platform, so you'll be able to use them for streaming Beat Saber soon. If you have a VR platform or a project you'd like to use our characters in, feel free to reach out so you can test them before we make the open source release.

Conclusion
If you want to see the characters during the challenge, you can check our Instagram and Twitter accounts.

Post by: Daniel @toxsam
Daniel García (aka Toxsam) is the founder and creative director at Polygonal Mind.
This post is about a premium mobile game we are working on together with Crescent Moon Games, called Ravensword Legacy, which is currently in development.
This week I worked on revamping the awesome characters that had already been made by another team member in order to allow them to talk. After some research, I found a plugin for Unity called LipSync Pro that lets you add keyframes to Audio Clips so that the talking character moves their mouth accordingly (it also supports other blendshapes, like blinking or yawning, and even some preset expressions such as angry and happy, so you can assign an expression to each of the character's lines).
The core of spoken languages
For this kind of work, game developers usually group phonemes together: for example, the "k" in "key" sounds the same as the "c" in "car", so only one phoneme (and one mouth shape) is needed for that sound. The same goes for "m", "b" and "p", and so on.
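As a tiny illustration of that grouping (purely an assumption about how you might organise it yourself, not data taken from LipSync Pro), a lookup like this maps several sounds onto one shared mouth shape:

```csharp
using System.Collections.Generic;

// Illustrative grouping of sounds into shared mouth shapes (visemes).
// Group names and membership are simplified assumptions, not a standard.
public static class VisemeGroups
{
    public static readonly Dictionary<string, string[]> Groups =
        new Dictionary<string, string[]>
        {
            { "MBP", new[] { "m", "b", "p" } }, // lips pressed together
            { "KG",  new[] { "k", "c", "g" } }, // e.g. the "c" in "car"
            { "AI",  new[] { "a", "i" } },      // open mouth
            { "O",   new[] { "o", "u" } },      // rounded lips
        };
}
```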
Adapting to the new needs
I proceeded to modify the models: opening their mouths and adding the inside of the mouth (commonly called the mouthbag), the tongue and the teeth. I also had to modify the textures so that the teeth, tongue and mouthbag were textured.
After this, I duplicated the resting pose three times and modified each copy for the A, E/I and O phonemes. Since the game is low poly, with pixel post-processing and a limited colour palette (sometimes even as low as 8 bits!), too much fidelity and/or fluidity would make it look uncanny.
These heads were then exported as a single head with 4 blendshapes, using the modified mouths as the targets for said blendshapes.
Setup of the system
I created a LipSync info .asset file from that Audio Clip via LipSync's Clip Editor (shortcut Ctrl + Alt + A) and started adding the phonemes that matched what the line was saying. Having only 3 phonemes really sped up this process; otherwise it would have been too tedious. After that was done, I saved the LipSync info .asset file in the same folder as my Audio Clip.
Each of these black markers means that the mouth will change to the specified phoneme at that point in time. Once this was done, I went back to the character head prefab, added the LipSync script, and assigned the head mesh as the main mesh and the teeth as the secondary mesh; this means the head blendshapes will drive the teeth blendshapes too. I also assigned this character's Audio Output as the source of the line's sound and dropped it into the slot.
I then specified which blendshapes were assigned to which phonemes, so that LipSync knew which blendshape to change every time the time slider passed over a phoneme marker.
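If you're curious about the underlying idea, here's a rough, generic sketch of driving blendshapes from timed phoneme markers. This is not LipSync Pro's code or API, just an illustration built on Unity's SkinnedMeshRenderer blendshape calls, with made-up marker data and indices.

```csharp
using UnityEngine;

// Generic illustration: snap the mouth to a phoneme's blendshape when the
// audio playback time passes that phoneme's marker.
// (Not LipSync Pro's implementation; names and indices are placeholders.)
public class SimplePhonemeDriver : MonoBehaviour
{
    [System.Serializable]
    public struct PhonemeMarker
    {
        public float time;          // seconds into the audio clip
        public int blendShapeIndex; // e.g. 0 = A, 1 = E/I, 2 = O
    }

    public SkinnedMeshRenderer headMesh;
    public AudioSource voice;
    public PhonemeMarker[] markers; // assumed sorted by time

    void Update()
    {
        if (!voice.isPlaying) return;

        // Apply every marker we've passed; the latest one wins.
        foreach (PhonemeMarker marker in markers)
        {
            if (voice.time >= marker.time)
            {
                ApplyShape(marker.blendShapeIndex);
            }
        }
    }

    void ApplyShape(int index)
    {
        // Reset every mouth shape, then fully enable the current one.
        for (int i = 0; i < headMesh.sharedMesh.blendShapeCount; i++)
        {
            headMesh.SetBlendShapeWeight(i, i == index ? 100f : 0f);
        }
    }
}
```

Snapping straight to each shape, rather than smoothly blending between them, also happens to suit the low-fi pixel look described above.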
Conclusion
And so this is the end result! It was a very fun experiment and I'll probably end up using this method again in the future for personal projects.
Please be aware that the audio clip was just a test to make sure the plugin worked; it's not intended to be used in the final product, since it's a dubbed line from another game.
If this was helpful to you in any way, please consider sharing it with your gamedev friends; we really appreciate your support!
Post by: Alex @Vontadeh
Alejandro Bielsa is a junior 3D artist working at Polygonal Mind's in-house team.
Passionate about videogames, avid tutorial drinker and cat lover.
This post is about a mobile premium game we are developing in-house, called Ma'kiwis.
Long story short, Ma'kiwis is an adventure game for mobile devices where you play as a shaman leading mini tribe people to safety.
This week's sprint was about adding a few items to the game and making the first cutscenes, so we could start testing the game flow with them in place. I was assigned to work on the cutscenes that give the player an introduction to the game's plot and gameplay, basically the tutorial.
Storyboards of the cutscenes in Level 1
Maya animations + Unity's Animation System was too messy
After trying for a few days, I felt the system we had been using was a bit limiting: it didn't let me do basic things like blending cameras or timing events such as camera shakes or starting and stopping Particle Systems. We even had to animate the character interactions together as a single GameObject using Unity's Animation system.
The thing I disliked most about the previous system was the inability to blend between pre-fixed cameras. It meant we couldn't transition back to the main gameplay camera after a cutscene, which resulted in hard cuts every single time, or fades to black. In my opinion this felt too repetitive, since there are already a lot of other camera cuts when simple events occur in game, like activating a switch or picking up a collectible. I wanted something a bit more dynamic that would attract the player's attention, so blending between cameras was really needed. After talking with the rest of the team, we decided to upgrade the Unity version we were using (from 5.6.4f1 to 2018.2.0f2) so we could use Timeline (included since Unity 2017.1) + Cinemachine.
Example of the Popping problem when using Animator
Using Cinemachine
Cinemachine is a free asset developed by Unity that brings a lot more freedom and a more cinematic look to Unity's camera system, allowing you to control field of view, depth of field, camera collision and the much-needed blending between cameras, among other great features.
Cinemachine + Timeline is a very powerful combination!
This is done because we actually blend from the Main Camera to the position of the cutscene camera, which caused a stutter: the Following script kept trying to pull the camera back to its gameplay position. Basically, there were two things telling the camera what to do: the Following script was telling it to keep aiming at the player, and the Gameplay one (which we were blending to) was telling it to follow the spline/bezier until it reached the cutscene camera's position.
This way we always know where the camera should be during gameplay, so we can blend back to it after a cutscene.
Another possible solution would have been to have a master camera and treat the gameplay camera as just another cutscene camera, so the master could blend between them without stutter, but that would have meant changing the whole camera system, and we couldn't afford that.
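To make the camera discussion more concrete, here's a minimal sketch of priority-based blending with Cinemachine: a CinemachineBrain on the Main Camera blends between a gameplay virtual camera that always follows the player and a cutscene virtual camera. The names, priority values and the script itself are illustrative assumptions; in our project the cutscene side is driven from Timeline rather than by a script like this.

```csharp
using UnityEngine;
using Cinemachine;

// Illustrative priority-based switching between two virtual cameras.
// The CinemachineBrain on the Main Camera performs the actual blend.
public class CutsceneCameraSwitcher : MonoBehaviour
{
    public CinemachineVirtualCamera gameplayCam; // Follow/LookAt set to the player
    public CinemachineVirtualCamera cutsceneCam; // framed for the cutscene

    void Start()
    {
        // The gameplay camera wins by default.
        gameplayCam.m_Priority = 10;
        cutsceneCam.m_Priority = 0;
    }

    public void StartCutscene()
    {
        // Raising the cutscene camera's priority makes the Brain blend to it.
        cutsceneCam.m_Priority = 20;
    }

    public void EndCutscene()
    {
        // Dropping it back blends smoothly to the gameplay camera, which has
        // kept following the player the whole time, so there's no stutter.
        cutsceneCam.m_Priority = 0;
    }
}
```

Because the gameplay virtual camera never stops following the player, the Brain always has a valid gameplay shot to blend back to, which is exactly the behaviour described above.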
Hope my struggle with the cutscenes can help someone. :)

Post by: Alex @vontadeh
Alejandro Bielsa is a junior 3D artist working at Polygonal Mind's in-house team.