Gradients with position map

Gradients in Substance Painter can be complex to achieve if we don't use the right tool. For this example we are going to make a gradient from red to gray on this character's pants. To start, we need the position map baked; if it isn't, you can bake it directly in Substance Painter. We start by creating a fill layer on top of the base color layer. We then create a mask on the gradient layer and add a 3D Linear Gradient generator. Inside the generator we have the options 3D Position Start and 3D Position End; for the gradient to work correctly, we have to pick the color of the model's position map at each end. When we return to the material display mode, you will see the result of the gradient.

Bake lighting

Baking lighting is a very simple and very useful process that can help us highlight parts of the model, and if the final model is not going to be affected by real-time lighting, it can provide a more realistic touch. We start by creating a fill layer with the properties we want the light to have, and add a Light generator to it. Inside the generator we find properties such as the direction, the height and the intensity. Once everything is configured to our liking, we have to change the layer's blending mode: the light can work well with Overlay or Soft Light, although the best approach is to try which one best matches the result we are looking for.

Anchor points

Anchor points let smart masks detect normal deformations made in other layers that are not baked into the normal map. In this example we have a layer with height information, and we want the mask we are going to create next to paint the borders automatically. If we apply a smart mask now, it won't detect the height we painted, since that deformation isn't baked into the normal map.
To solve it, we must enter the smart mask's options and modify some attributes. The first is under Microdetails: change the first two parameters to True. The second is at the bottom, in Micro Height: there we select Anchor Points and look for ours (it's recommended to name the anchor points properly). After doing this, everything should work correctly.

Work the roughness

The roughness map is one of the required maps in a PBR material, and also one of the most important when it comes to giving detail to a model, so it is important that the roughness of a PBR model has work and detail in it. In the first image we have a model with a practically flat roughness; in the second image we have the same model with a much more worked roughness. To achieve these results we can use smart masks, occlusion and curvature masks and, of course, textured brushes to add wear details. Although it may seem very basic, the brightness variation across a material is what helps give a model realism.

Layer blending mode
Nestor Llop
3D Artist
I am a 3D artist who is passionate about sculpting creatures and characters.
Double Joints or Add a Bone between two Bones
In order to do that, first select the bone we want to rotate and press Alt + P to clear the parent in Edit Mode, then move the joints to create a small separation between them.
This small fix will also help you when creating inverse kinematics. The example depicts an imaginary leg, to make it easier to identify the different parts of the model, but you can repeat this on arms and other types of joints.

Model with better rotation on the joint

Inverse Kinematics

With inverse kinematics, the program calculates the position of the joints based on a given position and rotation relative to the start of the chain. How can we do this in Blender? To explain it we will use the "leg" model from before. First, select the feet joint in Edit Mode and press E to extrude a new joint; rename it FeetIK. Repeat the same to create a joint at the knee to act as the pole target. Then press Alt + P to clear the parent of both joints. Move the KneeIK away from the model, towards where you want the leg to bend. Then go to Pose Mode, select the shin bone, go to the Bone Constraint menu in the right panel and select Inverse Kinematics. Select the armature as the target, then pick the bone you want to control the IK with, in this case the FeetIK. Choose the armature as the pole target (where the chain of bones will try to bend to) and select the bone you want it to follow, in this case the KneeIK. Change the pole angle if the model rotates at a different angle, and change the chain length: if you have double joints select 3, if not select 2. Select the feet bone in Pose Mode, and in the Bone Constraint properties select Copy Rotation. Select the armature as the target and the FeetIK as the bone. If the inverse kinematics bends the wrong way, or goes astray without bending, that means your joint needs to be closer to the bending point: go to Edit Mode, move the joint, then check the inverse kinematics again.

Final inverse kinematics on the model

Custom Bone Shapes

In the last segment we saw how to create a bone for the inverse kinematics, but seeing a lot of bones with the same shape can be confusing. We can change their shape to make them easier to identify. Once we have the meshes selected, go to the bones you created before, select the one whose mesh you want to change, and go to Display properties, Custom Shape. There, select the mesh you want to swap it for. Sometimes the mesh doesn't look right when you change it, but you can edit the original mesh and it will change on the bones too. Remember to apply the changes from the Object menu any time you change something. You can only view the custom shapes in Pose Mode.

Default Bone Display & Color Coding

To further edit your bones, you can change the shape you view them with: just go to the menu on the right panel called Object Data Properties (it has a running man as an icon) and click Viewport Display. There you can check In Front to always see the bones in front of your model, and change their shape; by default they are octahedral. You can also change the color of the shapes you made to substitute the bones: select the bone and, in the Object Data Properties, under Bone Groups, create a new group and change the default colour. If you have symmetric bones, it's recommended to pick different colors for the left and right sides.

Pose Libraries

In Blender you can save different poses to access them immediately. To do that, put the bones in the pose you want to save and select them. Go to Object Data Properties, Pose Library, and create a new pose library there. Important: check the shield icon to the right of the name, because if you don't, you will lose the progress you made in the pose library even if you save the scene.
I recommend saving the base rig in an initial pose, so that you don't lose the original rig.

Pose library for the leg

Conclusion

This is a small collection of tricks that I compiled after having to animate some models that were needed in glTF. It was a bit difficult, since Blender is not friendly to people who learned Maya, 3ds Max or any other industry software. Though they are making advances in making this software easier to learn, for example by making it possible to switch to industry-standard controls, it still needs work. Some controls change when you switch to the industry standard; that's why this guide is for those who learned software like Maya but want or have to use Blender, to learn how the controls work and where the basic things are.

Laura Usón
3D ARTIST
Passionate about videogames, movies and creatures. Artist by day and superhero at night.

Premise

"Create, explore and trade in the first-ever virtual world owned by its users." With this welcoming sentence, Decentraland invites us to dive into one of the first blockchain-backed platforms, an alternate reality in which to express our ideas and projects to the world. Launched in February 2020, Decentraland has seen its potential grow exponentially as different blockchain-related companies and projects have placed a headquarters or contact point there to build bridges with their audience across the metaverse.
Finding the headquarters' purpose

Ahh yes, here we are at the starting point of this journey. When it comes to developing a building with a specific goal, we must first ask what it is going to be used for, how its space will initially be divided, and how much space we have. Two words: What and Where. What does this building gather? Nugget's News had a clear vision of how its content had to be divided and placed:
Where is this building at? With our general info distributed by floors, we had at our disposal a 2x2 LAND estate in Decentraland. This meant:
Initial ideas, visual conceptualization

The project came in to modernize a previous building designed for the same purpose, but the client wanted a more unique approach rather than a realistic building. When it comes to unleashing imagination, you don't have to care about concrete covers, pipes or electrical equipment. There is no such thing as supporting pillars or weight constraints, and everything you make has a purely aesthetic reason. Basically: following the need to modernize its appeal, for this project we played with the idea of having a hangout area, a learning place and a screening place in a building with a sober but elegant design. We chose modernist architecture as the main reference and mixed it with the idea of a giant logo surrounding the building.

Blockout, helpful space design

After the references have been set, it's key to start developing a mock-up or quick model that draws the volume boundaries of the building and properly mixes what we want (#00) with how we want it (#01), based on the purpose of the building and the references chosen. This gives us a proper shot to iterate on, and we can start trying it out in-engine to match sizes. It also gives us a scope of how many objects we have to craft, and lets us anticipate some possible constraints based on our goals. From this blockout we extracted specific references for chairs, counters and every visual needed. It was time to get into shapes.

Shape development

As every shape gets developed, individually or along the other props on the same floor, the project starts to approach a critical state where the composition has to be remade or tweaked to achieve a beautiful interior and exterior design. After different iterations and redesigns we get to the final design, all running in-engine. Once we have all shapes made and the initial composition set, it is time to move to the next step: designing materials and applying them.

Material design and animation execution

As we said before, the Nugget's News building look comes from modernist architecture with some artistic touches of our own. This means the building mainly uses "modernist-like" materials such as concrete, clean wood, painted steel, brushed grey iron and a little bit of marble. But we cannot forget this is a corporate building, so we have to mix those ideas with matching colours from the colour palette of the Nugget's News website. This is basically a palette of 5 blues and a coral variant made to contrast with the rest of the components. Both main ideas mixed together result in the following material (excluding albedo dyes). Of course, creating materials is not a one-shot job: we have to iterate and local-deploy constantly to reach the result we want, and at the same time play within all the constraints. This means we have to cleverly reuse materials along the building, minimize those that can only be in one place or are too specific, apply different materials to different polygon sets in the same mesh, and so on. An example of material design evolution is the wall (and floor) of this room and how it evolves until it gets nailed. Regarding animations, this is something I was quite thrilled to see done: the idea of the immense Nugget's News logo being animated and morphing during the whole experience.
We found it pretty interesting that the shape of the logo has an almost continuous, tileable curvature, so we could play with it and invert the curvature as an animation, making the logo loop non-stop just by morphing. A concept made to understand how the logo should deform: it exploits the fact that it already has a "natural" curvature and inverts it seamlessly. Technically it is difficult to get right, as the rig has to be correctly executed so the animation drags the vertices properly while keeping model complexity as simple as possible. But once we have it right and the animation complete, we can get our hands on the next step: color and UV mapping. The UV mapping came together as the NN shape came along: since the logo itself is made out of triangles, this benefits us, and we can make the whole texture with two gradients, one for the blue-to-light gradient triangles and another for the blue-to-dark gradient triangles.

Determining info, labels and other communications

Once the composition and all the visual aspects of the scene are ready for evaluation and have been checked in-engine, it is time to get our hands on designing the information areas. At this point we are basically creating a good user experience: thinking deeply about and assessing how the design will be experienced, to ensure the best result. This means placing everything in a way that is not tedious for the player.
A welcoming area, with out-links that take the user to other places related to the company, like Twitter or the web.
Multiple panels that can be a calendar of events, a motif of the company or just general info about its activity.
On the second floor, a second welcoming counter, as this floor's content is related but has a different mission: if we want the player to feel welcomed and to understand what's going on, this is the best way. There is also another link board, so this connection is always close to the user's experience.
Different panels with different boards, explaining specific materials and contents about the company, placed along the second floor in independent cubicles. Each cubicle has its own "Learn more", so the player goes directly to the expanded content related to what is being told.
A mid-floor between Floors 2 and 3, set to amplify the usable space and keep everything airy. During the whole displacement along the levels, you can see where you are heading.
A general view of the conference area, spacious and set to gather big meetings with announcements or "TED talks". And a viewpoint from the same stage.

Conclusion

Once everything is in place and the correct information is set, there is nothing more to do but upload our content to the metaverse and let people experience it. This project has given us a fantastic opportunity to create a headquarters that had no relation to our daily work, as we are only indirectly involved with cryptocurrencies; we focus on 3D developments and experiences. I really hope to get my hands again on such different projects as building the worlds others imagine.

Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving everyday to be a better hooman.

Premise

VRChat is a well-known social platform for all those who have VR equipment, and even if you don't have any specific hardware, you can join the party from your standard PC screen too! When you create a scenario, you have to be cautious with texture sizing and data management to make sure that not only you can run your scene, but the target audience can too! If you fail to democratize hardware requirements, you fail to create a popular VRChat World.
What is the Static value

If a GameObject does not move at runtime, it is known as a static GameObject. If a GameObject moves at runtime, it is known as a dynamic GameObject.
Applying only one type of static to an object is as easy as clicking the dropdown arrow next to Static and marking all your desired values. Static can make a huge difference in how performance behaves in Unity; it is a 101 of optimizing your work and getting it running on potatoes. Static, well used, is the secret to smooth fps. In the Game view we can find a little tab called "Stats"; there we can see how many batches are being processed per frame in the current FOV of the camera set in the scene. Without lightbakes, Occlusion Culling, Static Batching and the rest, this number can easily rise, as every feature needs a separate calculation per object. This means (among others): calculating mesh visualization, applying the material (1 batch per shader pass), lighting (if realtime, 1 batch per object), realtime effects like particles, the skybox... So you can see that Static-ing your objects is crucial, especially the flag that tells the engine not to modify anything in the object's render/transform: Static Batching. Before we get our hands on Static Batching, we will unravel the different types with a brief explanation and usage:
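For reference, the same per-type flags that the Static dropdown exposes can also be set from an editor script through Unity's StaticEditorFlags enum. A minimal, hedged editor-only sketch; the menu path and the particular flag selection are illustrative, not a recommendation for every object:

```csharp
using UnityEditor;
using UnityEngine;

// Hedged editor-only sketch: marking the selected objects static
// per type, mirroring the Static dropdown in the Inspector.
public static class StaticFlagsSketch
{
    [MenuItem("Tools/Mark Selection Static")] // illustrative path
    static void MarkSelectionStatic()
    {
        foreach (GameObject go in Selection.gameObjects)
        {
            // Combine only the static types the object really needs.
            // (ContributeGI was called LightmapStatic in older Unity.)
            StaticEditorFlags flags =
                  StaticEditorFlags.BatchingStatic        // mesh combining
                | StaticEditorFlags.ContributeGI          // lightmapping
                | StaticEditorFlags.OccluderStatic        // occlusion culling
                | StaticEditorFlags.OccludeeStatic
                | StaticEditorFlags.ReflectionProbeStatic;

            GameObjectUtility.SetStaticEditorFlags(go, flags);
        }
    }
}
```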
You can find information about creating a good Occlusion Culling system in VRChat here.

The Static Batching in the Gauguin Avatar Garden

Static Batching tells the engine to combine similar meshes in order to reduce their call count. But what is similar for Unity? Unity will combine all meshes that have Static Batching checked and share the same materials; in other words, the engine combines by material. This notably reduces the number of times it calls to render a mesh without killing its independence as an object: the meshes are combined, but Unity still considers all the GameObjects and their respective components (Transform, Physics...), they just won't move. So animated objects are off the table: they are dynamic objects and should have their performance improved in other ways. As an example, here in the Avatar Garden we have a set of assets repeated all along the environment that vary in Transform values but share everything else (material settings). All these objects are marked as Static, so Unity only renders one combined batch per material: one for all the meshes sharing the Water material, another for all the meshes sharing the ground material...
and so on, with every material match between all the static-batched GameObjects. The result of Static Batching is that Unity combines all the meshes that use the same Water material, or all the meshes that use the ground material.

If you animate a GameObject by its Transform and it has Static Batching checked, it won't move at runtime. Meanwhile, if you have an animation based on shader values, it will get animated: shader modification at runtime is still possible for static-batched meshes. The river in the Avatar Garden is an animation of the values of the material's shader. The engine is told to interpolate two values over an amount of time and does it through the shader; this lets us make this type of effect while keeping the draw call rate low. Also notice that the shadow doesn't get displaced during the animation. This is due to the UV space being used: the Standard shader uses UV0 for input texture maps like Albedo, Metallic or Smoothness, so the values modified at runtime only affect the UV0 channel of the model. Lightmaps, on the other hand, use UV1 and are not modified at runtime. For more information on how we created the river in the Avatar Garden, check our article about it!

Reducing draw calls

To draw a GameObject on the screen, the engine has to issue a draw call to the graphics API (such as OpenGL or Direct3D). Draw calls are often resource-intensive, with the graphics API doing significant work for every draw call, causing performance overhead on the CPU side. To reduce this impact, Unity goes for Dynamic Batching or Static Batching, as we stated previously. Good practices to reduce draw calls include sharing as few materials as possible between objects, atlasing textures, and marking every non-moving object as static.
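Looping back to the river example above: because a static-batched mesh can still have its shader values animated, an effect like this costs no extra draw calls. A minimal, hedged sketch of that idea; the _WaveOffset property name and the speed value are illustrative, not the actual river setup:

```csharp
using UnityEngine;

// Minimal sketch: animate a float on the SHARED material so every
// static-batched mesh using it updates together, without touching
// any Transform (which would not move at runtime anyway).
public class RiverShaderAnimator : MonoBehaviour
{
    // "_WaveOffset" is an illustrative name; use whatever float
    // property your water shader actually exposes.
    [SerializeField] string property = "_WaveOffset";
    [SerializeField] float speed = 0.25f;

    Renderer rend;

    void Start()
    {
        rend = GetComponent<Renderer>();
    }

    void Update()
    {
        // Interpolate back and forth between two values over time,
        // like the river effect described above.
        float t = Mathf.PingPong(Time.time * speed, 1f);

        // sharedMaterial affects all meshes using this material
        // (note: in the editor this edits the material asset itself).
        rend.sharedMaterial.SetFloat(property, t);
    }
}
```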
This is an example of atlas packing: all the objects from the exterior use the same texture maps, and so they are draw-called once. The Avatar Garden uses tileable textures, which also reduces the number of materials. For color depth and variation we used lightbaked lighting.

Conclusion

The Static value is a must for improving runtime performance in Unity. Well executed, it can have a heavy impact on all machines and can let people with low-end machines run high-end experiences. You are more than welcome to drop any question or your try-outs and results!

Join us: https://discord.gg/UDf9cPy

Additional sources:
https://docs.unity3d.com/Manual/GameObjects.html
https://docs.unity3d.com/Manual/StaticObjects.html
https://docs.unity3d.com/Manual/DrawCallBatching.html

Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving everyday to be a better hooman.

Premise

VRChat is a well-known social platform for all those who have VR equipment, and even if you don't have any specific hardware, you can join the party from your standard PC screen too! When you create a scenario, you have to be cautious with texture sizing and data management to make sure that not only you can run your scene, but the target audience can too! If you fail to democratize hardware requirements, you fail to create a popular VRChat World.
Please note that this article is VRChat focused, but the underlying behaviour is Unity engine-wide. When this article was written we did not yet have our hands on the VRChat SDK3 Udon system, so it is mainly written for VRChat SDK2 and general Unity knowledge.

What are Colliders

Collider components define the shape of a GameObject for the purposes of physical collisions. A collider, which is invisible, does not need to be the exact same shape as the Object's mesh. Every time you get into a game, the player moves or you have "physical" interactions with the environment, such as dropping a glass or throwing a rock, collisions get to work, behaving as solids, triggers or gravity-affected objects. This sorcery works under default parameters, but as in every game engine you can set up colliders to your liking to match your desired interaction results. This could range from being able to walk on firm ground to sinking progressively as you step into muddy dirt or enter a flowing river. Colliders are basically interactions between objects that send messages to the engine to determine whether an object is colliding, and therefore can't go through another, or whether it is triggering, in which case it can enter the other collider's volume and (if set up so) send a specific message. Here we could state that the player walks along the map and decides to enter the river; this translates into collider design: colliders define the height the player walks on, and triggers modify the player's speed, slowing them down as they go further into the river. This does indeed happen in our Avatar Garden environment: as the player walks through the river, they pass through the water mesh, walking over the river soil instead.

Types of Colliders

The three types of collider interactions present in Unity are Static, Rigidbody and Kinematic. Each has a specific use. Static colliders are for GameObjects that are not meant to move and are not affected by gravity. Rigidbody colliders are meant for objects that have forces applied to them, so gravity (and any other force set up) affects them every frame (unless they are in sleeping mode). Last but not least, there are Kinematic colliders, meant for kinematic bodies that are not driven by the physics engine; read more about kinematics here. If we recreate this mockup in Unity and press Play, the engine will make the ball fall while the rock won't move. To apply a 3D Collider component to an object, we have different options at our disposal: the primitive Box, Sphere and Capsule colliders, or the Mesh Collider.
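As a rough sketch of how these three interaction types map to components, here is a hedged example that builds the mockup above from script; the object choices and positions are illustrative:

```csharp
using UnityEngine;

// Hedged sketch of the mockup above: the rock stays still
// (static collider), the ball falls (Rigidbody), and a
// kinematic body ignores forces until moved from code.
public class ColliderTypesSketch : MonoBehaviour
{
    void Start()
    {
        // Static collider: a Collider with no Rigidbody attached.
        GameObject rock = GameObject.CreatePrimitive(PrimitiveType.Cube);
        rock.transform.position = Vector3.zero;

        // Rigidbody collider: gravity and forces apply every frame.
        GameObject ball = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        ball.transform.position = new Vector3(0f, 5f, 0f);
        ball.AddComponent<Rigidbody>();

        // Kinematic collider: has a Rigidbody but is not driven by
        // the physics engine; you move it via code or animation.
        GameObject platform = GameObject.CreatePrimitive(PrimitiveType.Cube);
        platform.transform.position = new Vector3(2f, 1f, 0f);
        platform.AddComponent<Rigidbody>().isKinematic = true;
    }
}
```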
By default, the Mesh Collider will pick the mesh assigned to the Mesh Filter as the mesh representing the collider. The Rigidbody component comes apart from the Collider component, giving it independence and control over itself and acting as an "add-on" to the static collider. It also states whether the collider's interaction Is Kinematic or not. Applying a collider to a GameObject is as easy as adding the component in the Inspector window once you have your GameObject selected. It is located under Component/Physics/, but you can also search for it using the keyword Collider.

What does the Physics Debugger do

After we set our colliders in the scene, the best way to previsualize and correct colliders prior to testing is the Physics Debugger. You will find this window located in Window/Analysis/Physics Debugger. This window will overdraw the colliders over your meshes, as if adding a layer of semi-transparent objects with a color that matches each type of collider: red for static, yellow for trigger, green for rigidbody and blue for kinematic colliders. Here you can check/uncheck whether to display the Collision Geometry, and you can also Mouse Select your GameObjects directly by their collider volume. The window offers a couple of features to help us configure and size the colliders as comfortably as possible: you can change the colours to whatever suits you best, change the transparency, or set a random variation to tell them apart. The Physics Debugger is going to be your best friend for spotting flaws in your physics before playing, or after noticing errors while testing!

Triggers in VRChat

Anyone experienced enough in game development will know that, in Unity, activating a trigger takes a C# script telling the engine what to do when one collider triggers another. The Trigger bool in the Collider component tells the physics engine to let other colliders pass through the triggered one. This is not possible in VRChat due to its custom script limitations, so it manages triggers through its Event Handler. Just add the VRC_Trigger script and the SDK will add the Event Handler. From this point, programming in VRChat turns visual and no real code is needed; just be aware that some things change places and it becomes more "artist friendly". To add a behaviour as a result of a trigger, just click Add in the VRC_Trigger component and start configuring your interactions. There are so many that covering a general use of these triggers is barely possible, so yes, the sky is the limit. Just remember that these operations can impact performance badly if they turn out to be expensive to execute.

Applying Colliders in the Gauguin Avatar Garden (100 Avatars)

The colliders in the Gauguin Avatar Garden by Polygonal Mind are a mix of Box Colliders and Mesh Colliders, because we wanted to keep things simple while keeping some collider volumes under our control. But that alone does not explain why it is like this. When you get your hands on colliders, the first question you have to ask yourself is: why am I making a collider? Followed by: what is my collider going to do? These two questions are essential to keep your collision complexity as low as possible, as you will want the physics engine to run as smoothly as possible and avoid artifacts in the player collision. Gameplay guides collisions: there is no reason to create a collider for every single thing in the scene. Instead, think about how the player is going to play (or how you intend them to play).
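For reference, outside VRChat's scripting limitations, the plain-Unity version of a trigger reaction like the river slowdown described earlier is a small C# script. A hedged sketch under stated assumptions: the PlayerMovement type, its speedScale field and the multiplier value are all hypothetical stand-ins, not VRChat or Unity built-ins:

```csharp
using UnityEngine;

// Hypothetical stand-in for your own movement script.
public class PlayerMovement : MonoBehaviour
{
    public float speedScale = 1f;
}

// Hedged sketch of the river-slowdown idea in plain Unity
// (in VRChat SDK2 this role is played by VRC_Trigger events).
// The water volume needs a Collider with "Is Trigger" checked;
// the player object needs a Rigidbody and a Collider.
public class RiverSlowdown : MonoBehaviour
{
    // Illustrative value: fraction of normal speed while wading.
    [SerializeField] float speedMultiplier = 0.5f;

    void OnTriggerEnter(Collider other)
    {
        PlayerMovement player = other.GetComponent<PlayerMovement>();
        if (player != null)
            player.speedScale = speedMultiplier; // slow down in water
    }

    void OnTriggerExit(Collider other)
    {
        PlayerMovement player = other.GetComponent<PlayerMovement>();
        if (player != null)
            player.speedScale = 1f; // restore speed on exit
    }
}
```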
The big box in the back keeps players from leaving the scene; encaging the player is a good way to let them climb anywhere freely without the risk of breaking the scene. Once again, one of the best practices in game development, this time applied to colliders, is doing the work by hand: don't let the engine do the math for you without telling it exactly what to do. Evaluating the most suitable collider for each occasion will give you tighter control over the debugging process. For example, these tree logs don't use a Mesh Collider to exactly match their shape when the collider comes into play, but why? There is no reason to spend a complex collision here when the player just needs to notice that there is a log in their way, nothing else. Another example of collider design: you don't need to create a collider for everything. If we had decided to create a collider for each small rock, the player would notice little bumps when walking, which would be very uncomfortable, or at least it wouldn't match the playable vision we had. Instead, the ground is a Mesh Collider of the same ground mesh, and the grass is not collidable either. As a last practical example, I want to point out that the trees in the Avatar Garden have no collisions on top: since no player can reach the high tree tops, and no primitive collider worked well for the curvature of our model, we decided to create a custom model just to fulfil this Mesh Collider need. Other things for which we decided to use Mesh Colliders were bushes and medium-sized plants, because there was no way to use primitive-shaped colliders on such shapeless vegetation. We tried to keep the shape of all the Mesh Colliders as simple as possible, or activated the "Convex" option to reduce them to 255 triangles when they were higher.

Conclusion

In conclusion, when it comes to game development, physics, or at least basic physics, is the second stage of an environment's development, so keep it always in mind when building your worlds! It can be a true game changer in how the experience is felt and enjoyed. Keep it simple, but also keep it clever! You are more than welcome to drop any question or your try-outs and results!

Join us: https://discord.gg/UDf9cPy

Additional sources:
https://docs.unity3d.com/2018.4/Documentation/Manual/CollidersOverview.html
https://yhscs.y115.org/program/lessons/unityCollisions.php
https://docs.unity3d.com/2018.4/Documentation/Manual/RigidbodiesOverview.html

Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving everyday to be a better hooman.