Adding the bones
Of course, the eyes won't move by themselves; they need a bone that will make them bounce. I'm sure you followed our rigging tutorial to easily rig your character with Mixamo and fix any nasty problems; if not, be sure to check it out here: Fix and reset your Mixamo avatar rig. Using the Create Joints tool, add your bones wherever you want. Make the bone chains as long as you need so the motion looks as smooth as possible. Want to give your character even more personality? Use blend shape visemes to add facial expressions while talking. You can easily follow our guide here: Create and upload a VRChat Avatar with blend shapes visemes. Now export your character, making sure the skin weights are correct and the skinning checkbox is ticked.
Time to bounce
Next stop, Unity. Be sure to have the Dynamic Bones asset installed in your project, because it's what we need to move the new bones. Check that everything is correct and the skin weights are working properly. Drag and drop the DynamicBones.cs script onto your character mesh, or add it as a new component in the Inspector tab. Time for some tweaks.
By default, Dynamic Bones gives pretty good results for bones that interact with meshes and are affected by gravity, but my case is a little bit special and we will have to adjust it accordingly.
Let's start
First of all, we need to assign which bones we want to be dynamic. For that, we will select the "Root" bone, that is, the bone before all of our dynamic bones.
Testing
Test, test, test. Move your character. Rotate it. Make sure it does what you want. You can get a lot of different effects just by adjusting a couple of parameters. This is definitely not what we want: while the eyes move accordingly on the Y and X axes, we don't want them to move on the Z axis. The eyes are in place, but we need to tweak how they move and behave when moved. These are all the options in the image below. Knowing what each option does, now it's time to test. Tweak some settings and try it yourself.
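If you prefer to set this up from code, or just want to see the moving parts at a glance, here is a minimal sketch of configuring the component from a script. The class and field names (DynamicBone, m_Root, m_FreezeAxis and friends) are taken from a common version of the Dynamic Bones asset and may differ in your copy, so treat this as an assumption and double-check the component in the Inspector.

```csharp
using UnityEngine;

// Minimal sketch: add a Dynamic Bone component to the eye rig and tune it.
// Assumes the Dynamic Bones asset is imported and exposes the public fields
// shown below (m_Root, m_Damping, m_Elasticity, m_Stiffness, m_FreezeAxis);
// names may vary between asset versions.
public class EyeBounceSetup : MonoBehaviour
{
    public Transform eyeRootBone; // the bone right before the dynamic eye bones

    void Start()
    {
        DynamicBone db = gameObject.AddComponent<DynamicBone>();
        db.m_Root = eyeRootBone;                    // which chain becomes dynamic
        db.m_Damping = 0.2f;                        // how fast the motion settles
        db.m_Elasticity = 0.1f;                     // pull back towards the rest pose
        db.m_Stiffness = 0.5f;                      // resistance to bending
        db.m_FreezeAxis = DynamicBone.FreezeAxis.Z; // stop the unwanted Z movement
    }
}
```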
If you don't know how to do it, check out our guide on how to upload your avatars into VRChat.
Conclusion
Dynamic bones are a simple yet super effective way to give life to your characters. With just a little bit of tweaking you can get really good results, making your characters more dynamic and lifelike. Moving clothes, hair, tails and eyes are just the beginning; your imagination is the limit here. Be creative!
Pedro Solans
3D ANIMATOR
Junior 3D Animator improving every day possible. Videogame and cat enthusiast.
Premise "Create, explore and trade in the first-ever virtual world owned by its users." With this welcoming sentence Decentraland invite us to dive into one of the first blockchain-backed platforms that enhances an alternate reality to express our ideas and projects into the world. Launched in February 2020, Decentraland has seen its potential grow exponentially as different blockchain-related companies and projects have placed a headquarter or contact point to connect bridges with their audience among the metaverse.
Finding the headquarters' purpose
Ahh yes, here we are at the starting point of this journey. When it comes to developing buildings with a specific goal, we must first ask what the building is going to be used for, how its space will initially be divided, and how much space we have. Two words: What and Where. What does this building gather? Nugget's News had a clear vision of how its content had to be divided and placed:
Where is this building located? With our general info distribution by floors settled, we had at our disposal a 2x2 LAND estate in Decentraland, which meant:
Initial ideas, visual conceptualization
The project came in to modernize a previous building designed for the same purpose, but the client wanted a more unique approach rather than a realistic building. When it comes to unleashing imagination, you don't have to care about concrete covers, pipes or electrical equipment. There is no such thing as supporting pillars or weight constraints, and everything you make has a purely aesthetic reason. Basically: following the need to modernize its appeal, for this project we played with the idea of having a hangout area, a learning place and a screening place in a building with a sober but elegant design. We chose modernist architecture as the main reference and mixed it with the idea of a giant logo surrounding the building.
Blockout, helpful space design
After the references have been set, it's key to start developing a mock-up or quick model that draws the volume boundaries of the building and properly mixes what we want (#00) with how we want it (#01), based on the purpose of the building and the references chosen. This gives us a proper base to iterate on, and we can start trying it out in-engine to match sizes. It also gives us a scope of how many objects we have to craft and lets us anticipate some possible constraints based on our goals. From this blockout we extracted specific references for chairs, counters and every visual element needed. It was time to get into shapes.
Shape development
As every shape gets developed, individually or along with the other props on the same floor, the project approaches a critical state where the composition has to be remade or tweaked to achieve a beautiful interior and exterior design. After different iterations and redesigns we get to the final design, all running in-engine. Once we have all shapes made and the initial composition set, it is time to go to the next step: designing materials and applying them.
Material design and Animation execution
As we said before, the Nugget's News building look comes from modernist architecture with some artistic touches made by us. This means the building mainly uses "modernist-like" materials such as concrete, clean wood, painted steel, brushed grey iron and a little bit of marble. But we cannot forget this is a corporate building, so we have to mix those ideas with matching colours from the colour palette of the Nugget's News website: basically a palette of five blues and a coral variant made to contrast with the rest of the components. Both main ideas mixed together result in the following materials (excl. Albedo dyes). Of course, creating materials is not a one-shot job: we have to iterate and local-deploy constantly to reach the result we want while playing within all the constraints. This means we have to cleverly reuse materials along the building, minimize those that can only be used in one place or are too specific, apply different materials to different polygon sets in the same mesh, and so on. An example of material design evolution is the wall (and floor) of this room and how it evolves until it gets nailed. Regarding animations, this is something I was quite thrilled to see done: the idea of the immense Nugget's News logo being animated and morphing during the whole experience.
We found it pretty interesting that the shape of the logo has an almost continuous, tileable curvature, so we could play with it and invert the curvature as an animation, making the logo loop non-stop just by morphing. A concept was made to understand how the logo should deform; it exploits the fact that it already has a "natural" curvature and inverts it seamlessly. Technically it is difficult to get right, as the rig has to be correctly executed so the animation drags the vertices properly with the simplest model complexity possible. But once we had it right and completed the animation, we could get our hands on the next step: colour and UV mapping. The UV mapping came along with the NN shape itself: as the logo is made out of triangles, this benefits us and we could make the whole texture with just two gradients, one for the blue-to-light gradient triangles and another for the blue-to-dark gradient triangles.
Determining info, labels and other communications
Once the composition and all the visual aspects of the scene are ready for evaluation and have been checked in-engine, it is time to get our hands on designing the information areas. At this point we are basically crafting a good user experience: thinking deeply about how the design will be experienced and making sure everything is placed in a way that is not tedious for the player.
A welcoming area, with out-links that take the user to other places related to the company, like Twitter or the website. Multiple panels that can hold a calendar of events, a motif of the company or just general info about its activity. On the second floor there is a second welcoming counter, as the floor content is related but has a different mission; if we want the player to feel welcomed and understand what's going on, this is the best way. There is also another link board, so this connection is always close to the user's experience. Different panels with different boards that explain specific materials and contents about the company are placed along the second floor in independent cubicles. Each cubicle has its own "Learn more" so the player goes directly to the expanded content related to what is being told. A mid-floor between Floor 2 and 3 was set to amplify the usable space and keep everything airy. During the whole journey along the levels, you can see where you are heading. A general view of the conference area, spacious and set to gather big meetings with announcements or "Ted talks". And a viewpoint from the same stage.
Conclusion
Once everything is in place and the correct information is set, there is nothing left to do but upload our content to the metaverse and let people experience it. This project has given us a fantastic opportunity to create a headquarters that had no relation to our daily work, as we are only indirectly involved with cryptocurrencies; we focus on 3D developments and experiences. I really hope to get my hands again on something as different as building the worlds others imagine.
Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving every day to be a better hooman.
Premise
VRChat is a well-known social platform for all those who have VR equipment, and even if you don't have any specific hardware, you can join the party from your standard PC screen too! When you create a scenario, you have to be careful with texture sizing and data management to make sure that not only can you run your scene, but your target audience can too! If you fail to democratize hardware requirements, you fail to create a popular VRChat world.
What is the Static value?
If a GameObject does not move at runtime, it is known as a static GameObject. If a GameObject moves at runtime, it is known as a dynamic GameObject.
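For completeness, the same Static checkbox can be driven from an editor script. Here is a minimal sketch (the menu path is invented for the example) that marks the current selection as fully static; as the next paragraph explains, you can also tick only the specific values you need.

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch: mark every selected GameObject as static.
// Individual flags (Batching, Lightmap, Occluder, Occludee, Navigation...)
// can be combined instead of enabling everything.
// Place this file inside an "Editor" folder.
public static class StaticFlagTools
{
    [MenuItem("Tools/Static/Mark Selection Fully Static")]
    private static void MarkFullyStatic()
    {
        foreach (GameObject go in Selection.gameObjects)
        {
            // (StaticEditorFlags)~0 sets every static flag at once.
            GameObjectUtility.SetStaticEditorFlags(go, (StaticEditorFlags)~0);
        }
    }
}
```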
Applying only one type of static to an object is as easy as clicking on the dropdown arrow next to Static and marking all your desired values. Static can make a huge difference in how performance behaves in Unity; it is a 101 of optimizing your work and getting it running on potatoes. Well-used Static flags are the secret to smooth fps. In the Game view we can find a little tab called "Stats"; there we can see how many batches are being processed per frame for the current FOV of the Camera set in the scene. Without lightbakes, Occlusion Culling, Static Batching and the rest, this number could easily rise, as every feature needs a separate calculation per object. This means (among others): calculating mesh visualization, applying the material (1 batch per shader pass), lighting (if realtime, 1 batch per object), realtime effects like particles, the skybox... So you can see that Static-ing your objects is crucial, especially the flag that tells the engine not to modify anything in the object's render/transform: Static Batching. Before we get our hands on Static Batching, we will unravel the different types with a brief explanation and usage:
You can find information about creating a good Occlusion Culling system in VRChat here:
Static Batching in the Gauguin Avatar Garden
Static Batching tells the engine to combine similar meshes in order to reduce the draw call count. But what counts as similar for Unity? Unity will combine all meshes that have Static Batching checked and share the same materials; in other words, the engine combines by material. This notably reduces the number of times it has to issue a call to render a mesh, without killing each mesh's independence as an object: the meshes are combined, but Unity still considers all the GameObjects and their respective components (Transform, Physics...), they just won't move. So animated objects are off the table, as they are dynamic objects and should be optimized in other ways. As an example, here in the Avatar Garden we have a set of assets repeated all over the environment that vary in their Transform values but stay the same in everything else (material settings). All these objects are marked as Static, so Unity only renders:
... and so on with every material match between all the Static Batched GameObjects. As a result of Static Batching, Unity combined all the meshes that used the same water material, and likewise all the meshes that used the ground material. If you animate a GameObject by its transforms and it has Static Batching checked, it won't move at runtime. Meanwhile, if you have an animation based on shader values, it will get animated; this means that shader modification at runtime is still possible for Static Batched meshes. The river in the Avatar Garden is an animation of the values of the material's shader: the engine is told to interpolate two values over an amount of time, and it does so through the shader. This allows us to make this type of effect while keeping the draw call rate low. Also notice that the shadow doesn't get displaced during the animation; this is due to the UV space being used. The Standard shader uses UV0 for input texture maps like Albedo, Metallic, Smoothness, etc., so all the values modified during runtime only affect the UV0 channel of the model. Lightmaps, on the other hand, use UV1 and are not modified during runtime. For more information on how we created the river in the Avatar Garden, check our article about it!
Reducing draw calls
To draw a GameObject on the screen, the engine has to issue a draw call to the graphics API (such as OpenGL or Direct3D). Draw calls are often resource-intensive, with the graphics API doing significant work for every draw call, causing performance overhead on the CPU side. To reduce this impact, Unity uses Dynamic Batching or Static Batching as stated previously; good practices to reduce draw calls include:
This is an example of atlas packing: all the objects from the exterior use the same texture maps, so they are drawn in a single draw call. The Avatar Garden also uses tileable textures, which further reduces the number of materials. For colour depth and variation we used lightbaked lighting.
Conclusion
The Static value is a must for improving runtime performance in Unity. Well executed, it can make a heavy difference on all machines and let people with low-end machines run high-end experiences. You are more than welcome to drop any question or your try-outs and results! Join us: https://discord.gg/UDf9cPy
Additional sources:
https://docs.unity3d.com/Manual/GameObjects.html
https://docs.unity3d.com/Manual/StaticObjects.html
https://docs.unity3d.com/Manual/DrawCallBatching.html
Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving every day to be a better hooman.
Premise
VRChat is a well-known social platform for all those who have VR equipment, and even if you don't have any specific hardware, you can join the party from your standard PC screen too! When you create a scenario, you have to be careful with texture sizing and data management to make sure that not only can you run your scene, but your target audience can too! If you fail to democratize hardware requirements, you fail to create a popular VRChat world.
Please note that this article is VRChat focused, but the topic itself is a general Unity Engine matter. When this article was written, we did not yet have our hands on the VRChat SDK3 UDON system, so it is mainly written for VRChat SDK2 and general Unity knowledge.
What are Colliders
Collider components define the shape of a GameObject for the purposes of physical collisions. A collider, which is invisible, does not need to be the exact same shape as the Object's mesh. Every time you get into a game, the player moves or you have "physical" interactions with the environment, such as dropping a glass or throwing a rock, collisions get to work, behaving as solids, triggers or gravity-affected objects. This sorcery works under default parameters, but as in every game engine, you can set up colliders to your liking to match your desired interaction results. This could mean being able to walk on firm ground but sink progressively as you step into muddy dirt or enter a flowing river. Colliders are basically interactions between objects that send messages to the engine to determine whether an object is colliding, and therefore cannot go through the other; or whether it is triggering, and can then enter the other collider's volume and (if set so) send a specific message. Here we could state that the player walking along the map decides to enter the river; this literal information translates into collider design as colliders that define at which height the player walks, and triggers that modify the player's speed, slowing them down as they go further into the river. This does indeed happen in our Avatar Garden environment: as the player walks through the river, they pass through the water mesh, walking over the river soil instead.
Types of Colliders
The three types of Collider interactions present in Unity are Static, Rigidbody and Kinematic. Each has a specific use: Static colliders are for GameObjects that are not meant to move and are not affected by gravity. Rigidbody colliders are meant for objects that have "forces" applied to them, so gravity (and any other set-up forces) affects them every frame (unless they are in sleeping mode). Last but not least, there are Kinematic colliders, meant for kinematic bodies that are not driven by the physics engine; read more about kinematics here. If we recreate this mockup in Unity and press Play, the engine will make the ball fall and the rock won't move. To apply a 3D Collider component to an object, we have different options at our disposal:
By default, the Mesh Collider will pick the mesh assigned to the Mesh Filter as the mesh that represents the collider. The Rigidbody component is separate from the Collider component, giving it independence and control over itself and acting as an "addon" for the static collider. It also states whether the collider's interaction Is Kinematic or not. Applying a Collider to a GameObject is as easy as adding the component in the Inspector window once you have your GameObject selected. It is located under Component/Physics/, but you can also search for it using the keyword Collider.
What does the Physics Debugger do
After we set our colliders in the scene, the best way to previsualize and correct colliders prior to testing is the Physics Debugger. You will find this window under Window/Analysis/Physics Debugger. It overdraws the colliders over your meshes, as if it were adding a layer of semi-transparent objects with a colour that matches each type of collider: red for static, yellow for trigger, green for rigidbody and blue for kinematic colliders. Here you can toggle whether to display the Collision Geometry, and you can also Mouse Select your GameObjects directly by their collider volume. The window offers a couple of features to help you configure and size the colliders as comfortably as possible: you can change the colours to the ones that suit you best, change the transparency or set a random variation between them. The Physics Debugger is going to be your best friend for spotting flaws in your physics before playing, or even after noticing errors while testing!
Triggers in VRChat
Anyone experienced enough in game development will know that in Unity, to activate a trigger you need a C# script telling the engine what to do when one collider triggers another. The Trigger bool in the Collider component tells the physics engine to let other colliders pass through it. This is not possible in VRChat due to the custom script limitations, so VRChat manages triggers through its Event Handler. Just add the VRC_Trigger script and the SDK will add the Event Handler. From this point, programming in VRChat turns visual and no real code is needed; just be aware that some things change place and it becomes more "artist friendly". To add a behaviour as a result of a trigger, just click Add in the VRC_Trigger component and start configuring your interactions. There are so many that covering a general use of these triggers is barely possible, so yes, the sky is the limit. Just remember that these operations can impact performance badly if they turn out to be expensive to execute.
Applying Colliders in the Gauguin Avatar Garden (100 Avatars)
The colliders in the Gauguin Avatar Garden by Polygonal Mind are a mix of Box Colliders and Mesh Colliders, because we wanted to keep things simple while keeping some other collider volumes under our control. But that alone doesn't explain why it is like this. When you get your hands on colliders, the first question you have to ask yourself is: why am I adding a collider? Followed by: what is my collider going to do? These two questions are essential to keep your collision complexity as low as possible, as you will want the physics engine to run as smoothly as possible and avoid artifacts in the player collision. Gameplay guides collisions. There is no reason to create a collider for everything in the scene. Instead, think about how the player is going to play (or how you intend them to play).
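For context, outside of VRChat's scripting restrictions, the river slow-down described earlier could look roughly like this in plain Unity. It is a minimal sketch: the "Player" tag check and the logged slow-down are placeholders for whatever your own player controller does; in VRChat SDK2 you would express the same idea with a VRC_Trigger instead.

```csharp
using UnityEngine;

// Minimal plain-Unity sketch of the river slow-down idea: a trigger volume
// that reacts when the player wades in. VRChat SDK2 does not allow custom
// scripts like this one; there you would configure a VRC_Trigger instead.
[RequireComponent(typeof(BoxCollider))]
public class RiverSlowZone : MonoBehaviour
{
    void Reset()
    {
        // A trigger collider lets other colliders pass through it
        // and only reports the overlap.
        GetComponent<BoxCollider>().isTrigger = true;
    }

    // Trigger events fire when the entering object has a Rigidbody
    // or a CharacterController.
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            Debug.Log("Player entered the river: reduce their move speed here.");
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
            Debug.Log("Player left the river: restore their move speed here.");
    }
}
```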
The big box in the back is there to keep players from leaving the scene; encaging the player is a good way to let them climb whatever they want without worrying about them reaching a point where the scene breaks. Once again, one of the best practices in game development, this time applied to colliders, is doing the work by hand. Don't let the engine do the math for you without telling it exactly what to do. Evaluating the most suitable collider on each occasion will give you tighter control over the debugging process. For example, these tree logs don't use a Mesh Collider to match their shape exactly when the collider comes into play, but why? There is no reason to spend a complex collision here when the player just needs to notice that there is a log in their way, nothing else. Another example of collider design goes here: you don't need to create a collider for everything. If we had decided to create a collider for each small rock, the player would notice little bumps when walking, which would be very uncomfortable, or at least it wouldn't match the playable vision we had. Instead, the ground is a Mesh Collider of the same ground mesh, and the grass is not collidable either. As a last practical example, I want to point out that our trees in the Avatar Garden have no collisions at the top. Because players can reach the high tree tops and no primitive collider worked well for the curvature of our model, we decided to create a custom model just to fulfil this Mesh Collider need. Other things for which we decided to use Mesh Colliders were bushes and medium-sized plants, because there was no way to use primitive-shaped colliders for such shapeless vegetation. We tried to keep the shape of all the Mesh Colliders as simple as possible, or activated the "Convex" option to reduce them to 256 tris when they were higher.
Conclusion
In collision, when it comes to game development, physics (or at least basic physics) is the second stage of environment development, so keep it always in mind when building your worlds! It can be a true game changer in how the experience is felt and enjoyed. Keep it simple, but also keep it clever! You are more than welcome to drop any question or your try-outs and results! Join us: https://discord.gg/UDf9cPy
Additional sources:
https://docs.unity3d.com/2018.4/Documentation/Manual/CollidersOverview.html
https://yhscs.y115.org/program/lessons/unityCollisions.php
https://docs.unity3d.com/2018.4/Documentation/Manual/RigidbodiesOverview.html
Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving every day to be a better hooman.
Premise
VRChat is a well-known social platform for all those who have VR equipment, and even if you don't have any specific hardware, you can join the party from your standard PC screen too! When you create a scenario, you have to be careful with texture sizing and data management to make sure that not only can you run your scene, but your target audience can too! If you fail to democratize hardware requirements, you fail to create a popular VRChat world.
#00 What is Occlusion Culling
Occlusion culling is a process which prevents Unity from performing rendering calculations for GameObjects that are completely hidden from view (occluded) by other GameObjects. Every frame, a camera performs culling operations that examine the Renderers in the Scene and exclude (cull) those that do not need to be drawn. By default, cameras perform frustum culling, which excludes all Renderers that do not fall within the camera's view frustum. However, frustum culling does not check whether a Renderer is occluded by other GameObjects, and so Unity can still waste CPU and GPU time on rendering operations for Renderers that are not visible in the final frame. Occlusion culling stops Unity from performing these wasted operations. https://docs.unity3d.com/Manual/OcclusionCulling.html
This is basically the core knowledge of the whole system. The technique is used to avoid doing real-time calculations for GameObjects that are not in the camera frustum. This improves framerate and event performance during runtime.
#01 How do you apply it to your scene
To begin creating an "occlusion area" you need to check the "Static" box, or just click on the dropdown and toggle "Occluder Static" and "Occludee Static". Another approach is to select the desired GameObjects and toggle the options in the Occlusion window, on the Object panel. This tells the engine to consider the GameObject when calculating the Occlusion Data within its Occlusion Area (Unity considers the whole scene as a single area if you don't configure one prior to baking).
Occluders and Occludees
The main difference between these two occlusion concepts is pretty simple, but it's important to keep it in mind when building your scene's occlusion areas and data.
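In short, an object marked Occluder Static can hide other objects behind it, while an object marked Occludee Static can be hidden by others. If you need to toggle these flags on many objects at once, a small editor script can do it for the current selection. This is a hedged sketch using Unity's editor API; the menu path is an arbitrary example.

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch: mark every selected GameObject as Occluder Static and
// Occludee Static so it is considered when baking Occlusion Data.
// Place this file inside an "Editor" folder.
public static class OcclusionStaticMarker
{
    [MenuItem("Tools/Occlusion/Mark Selection Occluder + Occludee Static")]
    private static void MarkSelection()
    {
        foreach (GameObject go in Selection.gameObjects)
        {
            StaticEditorFlags flags = GameObjectUtility.GetStaticEditorFlags(go);
            flags |= StaticEditorFlags.OccluderStatic | StaticEditorFlags.OccludeeStatic;
            GameObjectUtility.SetStaticEditorFlags(go, flags);
        }
    }
}
```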
An example of the Occludee toggle would be larger objects like grounds, which should be considered separately to ensure they are always rendered.
#02 Culling Portals and Culling Areas
Occlusion Areas are "cube"-shaped volumes that "group" all the GameObjects inside them, which are only rendered if the camera is placed inside the same area. This works well if you have multiple enclosed areas; in our case, Occlusion Areas didn't make sense as the whole scene is one open space without visual walls in it. Occlusion Portals connect two Occlusion Areas so the camera can render both areas through the portal region. The Open toggle allows or disallows this connection, and it can even be flipped at runtime (see the sketch at the end of this section).
#03 Alternatives to Unity's Occlusion Culling system
The occlusion system uses a built-in version of Umbra. As with any other system, it has its shortcomings and improvements compared to other occlusion engines. For other projects I have personally worked with Sector, an asset package from the Asset Store that is very helpful and, by the time I worked with it, was way better than Unity's Umbra (more flexible settings being its main selling point). Another thing to keep in mind is the use of shaders with an excess of passes. Each pass is a whole mesh calculation for the material to be rendered, so materials with more than two passes can be problematic for lower-end platforms like mobile. I state two as a minimum because transparent materials require two passes; furthermore, they require the renderer to render what is behind the transparent mesh, so they are quite the hard limit for low-end platforms.
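As a small illustration of that Open toggle, here is a hedged sketch of flipping an Occlusion Portal from a script at runtime, for instance when a door between two areas opens. The door wiring is made up for the example; only the OcclusionPortal.open property is real Unity API.

```csharp
using UnityEngine;

// Minimal sketch: open or close an Occlusion Portal at runtime, e.g. when a
// door between two occlusion areas opens. The "door" wiring is hypothetical;
// OcclusionPortal.open is the only real API used here.
public class DoorPortalLink : MonoBehaviour
{
    public OcclusionPortal portal; // assign the portal between the two areas

    // Call this from whatever opens or closes the door.
    public void SetDoorOpen(bool isOpen)
    {
        // While open, the camera can render through into the neighbouring area;
        // while closed, objects behind the portal are culled.
        portal.open = isOpen;
    }
}
```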
Please keep in mind that "Static Batching" meshes get combined at runtime by the Unity engine, which reduces the mesh-render batching but keeps the material batching.
#04 Occlusion in the Gauguin Avatar Garden
The whole scene is marked as "Static", as there are no dynamic objects to keep in mind (the water is animated through the material, not the shader). This made the occlusion set-up "easy", and the first steps were not very hard to take. Keep in mind the size of the occluder box you want to set: the bigger it is, the less "accurate" it will be, but at the same time the data will be much smaller. Each project needs its own balance. In this case, for Gauguin, we set the size to 1.5, meaning that the smallest "box" packing objects was 1.5 units (meters) in X/Y/Z. The Smallest Hole float tells the camera how big the "hole" in a mesh has to be to start showing what is behind it; this is especially tricky on elements with small holes or meshes with complicated shapes. The Backface Threshold relates to the directionality of a mesh to be rendered: the higher it is, the more "aggressive" the occluder will be, making the camera skip meshes that are not facing towards it. Note that all the "black" shadows are objects that are not getting rendered, as their lightbake remains on the mesh that is being rendered. Furthermore, you can see the "area" the camera is in, with the corresponding portals. When there are none in the scene, Unity creates them for you; the best workaround is to always do it manually and never let the program do the math for you. For this scene, the ground meshes were kept without the Occludee option, as smaller avatars made it through the ground floor due to the camera frustum and its near clip (this cannot be changed, as that is how it works in VRChat).
#05 cOcclunclusion
You may find Occlusion Culling easy to set up, or even unnecessary! But the truth is that it is a vital piece during the final stages of environment development, as it is the manager, loader and unloader of all the visual aspects of the camera, ensuring a smooth experience while maintaining the desired quality levels and keeping things hidden from view but not unloaded from the scene, to ensure fast loading and fast hiding. Also, each time you modify a GameObject property like its transform, or add/remove GameObjects from the scene, you should rebuild your Occlusion Data, as those GameObjects are still "baked" in the data. Keep this in mind, especially when working with large environments or low-spec platforms. You are more than welcome to drop any question or your try-outs and results! Join us: https://discord.gg/UDf9cPy
Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving every day to be a better hooman.
Isn't it great when you talk with somebody online and you see their mouth moving while they talk? It really adds to the experience, especially in Virtual Reality. That's what this is about: creating different shapes so you can see yourself talking when you look in a mirror. It's the little touches that take something from good to better. Let's say you already have your model done; it's also rigged and skinned, so it's ready to go. But you want to make some blend shapes, because in-game they look neat and funny. Well, let's make them! First, we need to know how many blend shapes we need to make. VRChat uses 16 different blend shapes. These are:
To make things easier in the future, I highly recommend always using the same prefix for each name, so later in Unity it's almost automatic, the prefix being vrc_v_blendshapename. This gives you a general idea of how I made the different shapes of the mouth depending on the viseme. Another thing to keep in mind is that even though vrc_v_sil doesn't change the shape whatsoever, you must still change something regardless. Now that we have every shape done, we will use the Shape Editor. Open the Shape Editor from the sculpting tab in Maya, or by going to Deform > Blend Shape. Now, select one shape that you created and then select the original model. Go to the Shape Editor and click on "Create Blend Shape". Repeat this with all 16 shapes.
Export and import
We have every shape ready, so now we will export the whole package. Select all the shapes, meshes and bones and go to Export. Be mindful to tick the Animation checkbox and make sure that Blend Shapes is activated too; if it's not, it won't export correctly. Now write the name you want and export it.
Upload
You should already have Unity 2018.4.20f1, or whichever version VRChat currently uses, set up. If you don't, check out this guide made by my friend Alejandro Peño where he explains how to set it up: With the character imported, we will add a new component called VRC_Avatar Descriptor. A couple of parameters you can edit will appear. We are only going to modify three of them: View Position, Lip Sync and Eye Look.
View Position
This parameter lets you decide where the first-person point of view is located; in other words, from where you are going to see inside VRChat. It is a no-brainer that we should put the little indicator at eye level, as close as possible to the eyes.
Lip Sync
How can we make our character talk? With this option right here! In Mode, select Viseme Blend Shape. A Face Mesh tab will now appear. Using the little circle on the right, you can select the mesh where the blend shape visemes are stored. In this case, since it's all the same mesh, we only have one option. Now we are talking (pun intended). Like I said before, using the right names makes our lives easier: every single blend shape falls into place. But just to be sure, give it a look.
Eye Look
If you have sharp eyes, you might have noticed that Blink was nowhere to be seen (these puns just keep coming). That's because we will use the Eye Look tab to configure it. Click on Enable and a couple of options will appear. Ignore the others, go to the Eyelids section and select the Blendshapes option. Select once again the mesh where the blend shapes are stored, and something like this will appear. If something is not properly assigned, you can change it from here. Since we only have the Blink blendshape state, we will leave Blink as it is and change the other two so they don't have any state at all. Like this:
PRO TIP
Use the Preview button to make sure that everything works correctly. You can even check all the other blendshapes if you want! Once it's finished, you can upload the character like you usually do. Again, if you don't know how to do it, you can check this guide:
Conclusion
Blend shape visemes are a great way to give life to your avatars in VRChat. I would 100% recommend using them in your future avatars. Depending on the model, it takes around 30 minutes to an hour to create all the shapes needed, and they look great. It's a lot of fun making these, so give them a try!
Pedro Solans
3D ANIMATOR
Junior 3D Animator improving every day possible.
Videogame and cat enthusiast.