Premise

VRChat is a well-known social platform for anyone with VR equipment, and even without any specific hardware you can join the party from a standard PC screen too! When you create a scene, you have to be careful with texture sizing and data management to make sure that not only you can run your scene, but your target audience can too! If you fail to democratize the hardware requirements, you fail to create a popular VRChat world.
Please note that this article is VRChat-focused, but the underlying concepts apply to the Unity Engine in general. At the time of writing we had not yet gotten our hands on the VRChat SDK3 Udon system, so this is mainly written for VRChat SDK2 and general Unity knowledge.

What are Colliders

Collider components define the shape of a GameObject for the purposes of physical collisions. A collider is invisible and does not need to be the exact same shape as the same Object's mesh. Every time you get into a game and the player moves, or you have "physical" interactions with the environment such as dropping a glass or throwing a rock, collisions get to work, behaving as solids, triggers or gravity-affected objects. This sorcery works under default parameters, but as in every game engine, you can set up colliders to your liking to match your desired interaction results. That could mean being able to walk on firm ground but sinking progressively as you step into muddy dirt or enter a flowing river.

Colliders are basically interactions between objects that send messages to the engine to determine whether an object is colliding, and therefore cannot pass through the other; or whether it is triggering, in which case it can enter the other collider's volume and (if set up to do so) send a specific message. Say the player is walking along the map and decides to enter the river: in collider design this translates into colliders that define the height the player walks at, and triggers that modify the player's speed, slowing them down as they go further into the river. This does indeed happen in our Avatar Garden environment: as players walk through the river, they pass through the water mesh and walk over the river soil instead.

Types of Colliders

The three types of collider interaction present in Unity are Static, Rigidbody and Kinematic. Each has a specific use. Static colliders are for GameObjects that are not meant to move and are not affected by gravity. Rigidbody colliders are meant for objects that have "forces" applied to them, so gravity (and any other configured forces) affects them every frame (unless they are in sleeping mode). Last but not least, Kinematic colliders are meant for kinematic bodies that are not driven by the physics engine; read more about kinematics here. If we recreate this mockup in Unity and press Play, the engine will make the ball fall while the rock won't move (reproduced in the code sketch at the end of this section). To apply a 3D collider component to an object, we have different options at our disposal: the primitive Box, Sphere and Capsule colliders, or a Mesh collider.
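Before looking at each component in detail, here is a minimal C# sketch of the three interaction types, reproducing the ball-and-rock mockup described above. This is vanilla Unity; all object names and positions are illustrative.

```csharp
using UnityEngine;

// Minimal sketch of the three collider interaction types, assuming an
// otherwise empty scene. Names and positions are illustrative only.
public class ColliderTypesMockup : MonoBehaviour
{
    void Start()
    {
        // Static collider: a collider with no Rigidbody. The rock never moves.
        GameObject rock = GameObject.CreatePrimitive(PrimitiveType.Cube);
        rock.name = "Rock";

        // Rigidbody collider: the physics engine applies gravity every frame,
        // so on Play the ball falls and lands on the rock.
        GameObject ball = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        ball.name = "Ball";
        ball.transform.position = new Vector3(0f, 5f, 0f);
        ball.AddComponent<Rigidbody>();

        // Kinematic collider: has a Rigidbody but ignores forces; you drive
        // it yourself through its Transform (e.g. a moving platform).
        GameObject platform = GameObject.CreatePrimitive(PrimitiveType.Cube);
        platform.name = "Platform";
        platform.transform.position = new Vector3(2f, 1f, 0f);
        Rigidbody rb = platform.AddComponent<Rigidbody>();
        rb.isKinematic = true;
    }
}
```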
By default, the Mesh collider will pick the mesh assigned to the Mesh Filter as the mesh representing the collider. The Rigidbody component is separate from the Collider component, giving it independence and control over itself and acting as an "add-on" to the static collider. It also states whether the collider's interaction Is Kinematic or not. Applying a collider to a GameObject is as easy as adding the component in the Inspector window once you have your GameObject selected. It is located under Component/Physics/, but you can also search for it using the keyword Collider.

What the Physics Debugger does

After we set our colliders in the scene, the best way to previsualize and correct colliders prior to testing is the Physics Debugger. You will find this window under Window/Analysis/Physics Debugger. It overdraws the colliders over your meshes, as if adding a layer of semi-transparent objects whose colour matches the type of collider: red for static, yellow for trigger, green for rigidbody and blue for kinematic colliders. Here you can toggle the display of the Collision Geometry, and you can also Mouse Select your GameObjects directly by their collider volume. The window offers a handful of settings to help you configure and size the colliders: you can change the colours to whatever suits you best, change the transparency, or randomize colours to create variation between them. The Physics Debugger is going to be your best friend for spotting flaws in your physics before playing, or after noticing errors while testing!

Triggers in VRChat

Anyone experienced in game development will know that in Unity, activating a trigger requires a C# script telling the engine what to do when one collider triggers another. The Is Trigger bool in the Collider component tells the physics engine to let other colliders pass through the triggered one (a minimal vanilla-Unity sketch follows at the end of this section). This is not possible in VRChat due to its custom script limitations, so VRChat manages triggers through its own event handler instead. Just add the VRC_Trigger script and the SDK will add the event handler. From this point on, programming in VRChat turns visual and no real code is needed; just be aware that some things change place, and it becomes more "artist friendly". To add a behaviour as a result of a trigger, just click Add in the VRC_Trigger component and start configuring your interactions. There are so many that covering a general use of these triggers is barely possible, so yes, the sky is the limit. Just remember that these operations can impact performance badly if they turn out to be expensive to execute.

Applying Colliders in the Gauguin Avatar Garden (100 Avatars)

The colliders in the Gauguin Avatar Garden by Polygonal Mind are a mix of Box colliders and Mesh colliders, because we wanted to keep things simple while keeping control over certain collider volumes. But that alone doesn't explain why it is like this. When you get your hands on colliders, the first question to ask yourself is: why am I making this collider? Followed by: what is my collider going to do? These two questions are essential to keep your collision complexity as low as possible, since you will want the physics engine to run as smoothly as possible to avoid artifacts in the player collision. Gameplay guides collisions: there is no reason to create a collider for every single thing in the scene. Instead, think about how the player is going to play (or how you intend them to play).
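Before the Gauguin-specific examples, here is the kind of vanilla-Unity trigger script referenced above, sketched around the river-slowdown idea from the first section. The "Player" tag is an assumption, and in an actual VRChat SDK2 world you would configure this with VRC_Trigger events instead of a custom script.

```csharp
using UnityEngine;

// Vanilla-Unity sketch of a trigger volume reacting to a player entering a
// river. The "Player" tag is an assumption; a VRChat SDK2 world would use
// the VRC_Trigger component instead of a custom script like this.
[RequireComponent(typeof(Collider))]
public class RiverTrigger : MonoBehaviour
{
    void Reset()
    {
        // Is Trigger lets other colliders pass through this volume while
        // still firing OnTriggerEnter/Exit messages.
        GetComponent<Collider>().isTrigger = true;
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            Debug.Log("Player entered the river: slow their movement here.");
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
            Debug.Log("Player left the river: restore their movement here.");
    }
}
```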
The big box in the back is there to stop players from leaving the scene; caging the player in is a good way to free them to climb whatever they want without worrying that they will reach a point that breaks the scene. Once again, one of the best practices in game development, this time applied to colliders, is doing the work by hand: don't let the engine do the math for you without telling it exactly what to do. Evaluating the most suitable collider for each occasion will give you tighter control during debugging. For example, these tree logs don't use a Mesh collider to exactly match their shape. Why? Because there is no reason to spend a complex collision here when the player just needs to notice that there is a log in their way, and nothing else.

Another collider design example: you don't need to create a collider for everything. If we had decided to create a collider for each small rock, the player would notice little bumps when walking, which would be very uncomfortable, or at least wouldn't match the playable vision we had. Instead, the ground is a Mesh collider using the same ground mesh, and the grass is not collidable at all.

For the last practical example, I want to point out that the trees in our Avatar Garden have no collision on top. Because any player can reach the high tree tops, and no primitive collider worked well for the curvature of our model, we created a custom model just to fulfil this Mesh collider's needs. Other things we used Mesh colliders for were bushes and medium-sized plants, because there was no way to use primitive-shaped colliders for such shapeless vegetation. We kept the shape of all the Mesh colliders as simple as possible, or enabled the "Convex" option, which reduces the collider to at most 255 tris.

Conclusion

In collision, when it comes to game development, physics, or at least basic physics, is the second stage of environment development, so always keep it in mind when building your worlds! It can be a true game changer in how the experience is felt and enjoyed. Keep it simple, but also keep it clever! You are more than welcome to drop any question or your try-outs and results! Join us: https://discord.gg/UDf9cPy

Additional sources:
https://docs.unity3d.com/2018.4/Documentation/Manual/CollidersOverview.html
https://yhscs.y115.org/program/lessons/unityCollisions.php
https://docs.unity3d.com/2018.4/Documentation/Manual/RigidbodiesOverview.html

Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving every day to be a better hooman.
#00 What is Occlusion Culling

Occlusion culling is a process which prevents Unity from performing rendering calculations for GameObjects that are completely hidden from view (occluded) by other GameObjects. Every frame, a camera performs culling operations that examine the Renderers in the scene and exclude (cull) those that do not need to be drawn. By default, cameras perform frustum culling, which excludes all Renderers that do not fall within the camera's view frustum. However, frustum culling does not check whether a Renderer is occluded by other GameObjects, so Unity can still waste CPU and GPU time on rendering operations for Renderers that are not visible in the final frame. Occlusion culling stops Unity from performing these wasted operations. (https://docs.unity3d.com/Manual/OcclusionCulling.html)

That is basically the core knowledge of the whole system. The technique avoids doing real-time calculations for GameObjects that are not in the camera frustum, which improves framerate and event performance during runtime.

#01 How do you apply it to your scene

To begin creating an "occlusion area" you need to check the "Static" box, or click on its dropdown and toggle "Occluder Static" and "Occludee Static" (a scripted equivalent is sketched at the end of this section). Another approach is to select the desired GameObjects and toggle the option on the Object panel of the Occlusion window. This tells the engine to consider the GameObject when calculating the occlusion data within its Occlusion Area (Unity treats the whole scene as a single area if you don't configure one before baking).

Occluders and Occludees

The difference between these two occlusion concepts is pretty simple, but it's important to keep it in mind when building your scene's occlusion areas and data: an Occluder can hide other objects from the camera, while an Occludee is an object that can itself be hidden (culled) when something occludes it.
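As a reference, here is a hedged editor-script equivalent of ticking those two checkboxes on a selection. The menu path is an arbitrary example, and the script must live in an Editor folder.

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch: marks the selected GameObjects as Occluder Static and
// Occludee Static, the scripted equivalent of the Inspector toggles above.
// Place this file in an Editor folder; the menu path is made up.
public static class OcclusionFlagsMenu
{
    [MenuItem("Tools/Mark Selection for Occlusion")]
    static void MarkSelection()
    {
        foreach (GameObject go in Selection.gameObjects)
        {
            StaticEditorFlags flags = GameObjectUtility.GetStaticEditorFlags(go);
            flags |= StaticEditorFlags.OccluderStatic | StaticEditorFlags.OccludeeStatic;
            GameObjectUtility.SetStaticEditorFlags(go, flags);
        }
    }
}
```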
An example of toggling only Occludee would be for large objects like grounds, which should be considered separately to ensure they are always rendered.

#02 Occlusion Portals and Occlusion Areas

Occlusion Areas are cube-shaped volumes that group all the GameObjects inside them, their contents only being rendered when the camera is placed inside the same area. This works well if you have multiple enclosed areas; in our case occlusion areas didn't make sense, as the whole scene is one enclosed space without visual walls dividing it. Occlusion Portals connect two occlusion areas so the camera can render both areas through the portal's region. The portal's Open toggle allows or disallows this connection (a runtime sketch follows at the end of this section).

#03 Alternatives to Unity's Occlusion Culling system

The occlusion system uses a built-in version of Umbra. Like any other system, it has its failures and improvements compared to other occlusion engines. On other projects I have personally worked with Sector, an asset package found in the Asset Store that is very helpful; by the time I worked with it, it was considerably better than Unity's Umbra, with more flexible settings as its main selling point.

Another thing to keep in mind is the use of shaders with an excess of passes. Each pass is a whole mesh calculation for the material to be rendered, so materials with more than two passes can be problematic for lower-end platforms like mobile. I state two as a minimum because transparent materials require two passes; furthermore, they require the renderer to render what is behind the mesh drawn with transparency, so they are quite a hard limit for low-end platforms.
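Circling back to the Open toggle from #02: in plain Unity it is exposed as OcclusionPortal.open, so a script can drive it at runtime. A hedged sketch follows; the door wiring is hypothetical, and a VRChat world would need the SDK's own components to call it.

```csharp
using UnityEngine;

// Sketch: driving an Occlusion Portal's Open toggle from a hypothetical door.
// While the door is closed, everything behind the portal stops being rendered.
[RequireComponent(typeof(OcclusionPortal))]
public class DoorOcclusionPortal : MonoBehaviour
{
    OcclusionPortal portal;

    void Awake()
    {
        portal = GetComponent<OcclusionPortal>();
    }

    // Call this from whatever opens or closes the door.
    public void SetDoorOpen(bool isOpen)
    {
        portal.open = isOpen;
    }
}
```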
Please keep in mind that "static batching" meshes get combined during runtime by the unity engine and so reduce the "meshrender" batching but keep the mat batching. #04 Occlusion in the Gauguin Avatar Garden The whole scene is marked as "Static" as there are no dynamic objects to keep in mind (the water is animated through material [not shader]). This made the Occlusion set up "easy" and not very hard to set the first steps. Keep in mind the size of the Occluder box you want to set, the bigger, the less "accurate" it will be, but at the same time the data will be much smaller. Each project needs its own balance. In this case for Gauguin we set the size to 1.5, meaning that the smallest "box" packing objects was of 1.5 units (meters) in x/y/z value. The Smallest Hole float is the setting to tell the camera how big it has to be the "hole" in the mesh to start casting what is behind it. This is specially tricky on elements with small holes or meshes with complicated shapes. The backface value is the value of directionality of a mesh to be rendered. The higher, the more "aggresive" the occluder will be, making the camera not compute meshes that are not facing towards the camera. Note that all the "black" shadows are objects that are not getting rendered as their lightbake remains on the mesh that is being rendered. Furthermore you can see the "area" that the camera is in with the correspondent portals. When there is none in scene Unity creates them for you. The best workaround is to always do it manually and never let the program do the math for you. For the scene, the ground meshes were kept without the Occludee option as smaller avatars made it through the ground floor due to camera frustrum and its near clip (this cannot be changed as it how it goes in VRChat). #05 cOcclunclusion You may find Occlusion Culling easy to set up or even unnecessary! But the truth is that is a vital piece during the final stages of environment development as is the manager, loader and unloader, of all the visual aspects of the camera, ensuring an smooth experience while mantaining the quality levels desired and keeping hidden from view but not unloaded from scene to ensure fast-charging-fast-hidding. Also each time you modify a gameObject property like transform or add/remove gameObjects from scene you should rebuild your Occlusion data as those gameObjects are still "baked" in the data. Keep it in mind specially when working with large environments or low-specs platforms. You are more than welcome to drop any question or your try-out and results! Join us: https://discord.gg/UDf9cPy Kourtin ENVIRONMENT ARTIST I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving everyday to be a better hooman.
Isn't it great when you talk with somebody online and you see their mouth moving while they talk? It really adds to the experience, especially in Virtual Reality. That's what this is about: creating different shapes so you can see yourself talking when you look in a mirror. It's the little touches that take something good to something better. Let's say you already have your model done; it's also rigged and skinned, so it's ready to go. But you want to make some blend shapes, because in-game they look neat and funny. Well, let's make them! First, we need to know how many blend shapes we need to make. VRChat uses 16 different blend shapes, one for each viseme.
To make things easier in the future, I highly recommend always using the same prefix in each name, so that later in Unity it's almost automatic; the prefix being vrc_v_blendshapename. This gives you a general idea of how I made the different mouth shapes depending on the viseme. Another thing to keep in mind: even though vrc_v_sil doesn't change the shape whatsoever, you must still change something regardless.

Now that we have every shape done, we will use the Shape Editor. Open the Shape Editor from the Sculpting tab in Maya, or by going to Deform > Blend Shape. Now select one shape that you created, then select the original model. Go to the Shape Editor and click "Create Blend Shape". Repeat this with all 16 shapes.

Export and import

We have every shape ready, so now we will export the whole package. Select all the shapes, meshes and bones and go to Export. Be mindful to check the Animation box, and make sure Blend Shapes is enabled too; if it's not, it won't export correctly. Now write the name you want and export it.

Upload

You should have Unity 2018.4.20f1, or whichever version VRChat currently uses, already set up. If you don't, check out this guide made by my friend Alejandro Peño where he explains how to set it up. With the character imported, we will add a new component called VRC_Avatar Descriptor. A set of parameters you can edit will appear; we are only going to modify three of them: View Position, Lip Sync and Eye Look.

View Position

This parameter lets you decide where the first-person point of view is located; in other words, where you will see from inside VRChat. It's a no-brainer that we should put the little indicator at eye level, as close as possible to the eyes.

Lip Sync

How can we make our character talk? With this option right here! In Mode, select Viseme Blend Shape. A Face Mesh tab will appear; using the little circle on the right, you can select the mesh where the blend shape visemes are stored. In this case, since it's all the same mesh, we only have one option. Now we are talking (pun intended). As I said before, using the right names makes our lives easier: every single blend shape falls into place. But just to be sure, give it a look.

Eye Look

If you have sharp eyes, you might have noticed that Blink was nowhere to be seen (these puns just keep coming). That's because we will use the Eye Look tab to configure it. Click Enable and a couple of options will appear. Ignore the others, go to the Eyelids section and select the Blendshapes option. Select once again the mesh where the blend shapes are stored, and the assignments will appear; if something is not properly assigned, you can change it from here. Since we only have the Blink blend shape state, we will leave Blink as it is and change the other two so they have no state at all.

PRO TIP: use the Preview button to make sure that everything works correctly. You can even check all the other blend shapes if you want! Once it's finished, you can upload the character as you usually do. Again, if you don't know how to do it, you can check the guide linked above.

Conclusion

Blend shape visemes are a great way to give life to your avatars in VRChat, and I would 100% recommend using them in your future avatars. Depending on the model, it takes around 30 minutes to an hour to create all the shapes needed, and they look great. It's a lot of fun making these, so give them a try!

Pedro Solans
3D ANIMATOR
Junior 3D Animator improving every day possible.
Videogame and cat enthusiast.
Getting the rig

Since we want humanoid avatars, the best way to get a fast rig is using Mixamo, an automatic rigging website that lets you create quick humanoid rigs for free. I won't cover how to use Mixamo, since we already have that covered in this post: https://www.notion.so/polygonalmind/Fix-and-reset-your-Mixamo-rig-Pedro-eccd01b2095545749e0a3d2a3e573558 But I will explain how to use all the tools I used when rigging almost every one of the 200+ different avatars we have made for the 100 Avatars project. So tag along, because the world of rigging is one where patience is KEY.

Avatar imported

You now have the avatar ready in your Maya project. There are a few places you have to look at closely, since they are the most problematic areas: the shoulders, the armpits and the hands. Depending on the character you might have to look at other places too, especially if it is a complex character. Ask yourself: are all the bones where they are supposed to be? In this case... no. Using the X-Ray Joints option you can easily see where each bone sits inside the body. Here, the shoulders aren't where they should be, so how can we move them? With a really useful tool called Move Skinned Joints. Go to the Rigging menu set, then to Skin; almost at the bottom, you should find the tool. Click the square on its right, then on any joint. Now you can move joints freely without any problem! Use it to move the shoulders to where they should be. Now it's time to skin!
Value sets the influence of the brush: more means more influence, with a maximum of 1 and a minimum of 0. The Flood button applies the selected value to the whole selected region. With this explanation of the tools, you should have a good idea of how to skin a character inside Maya. Now, skinning is not an easy thing, at least not to do it right; it requires a lot of patience. A couple of pieces of advice I can give: try to use the value 1 as little as you can, and do use the Smooth option, since it really is invaluable. Don't be scared of rotating bones; aim to get the cleanest breaking point in your mesh.

Conclusion

Remember to check those zones I wrote about earlier, and have fun! Skinning is an important process and takes time. The more you practice, the better you will become!

What now?

If you want to see what the next step is, read my post about how to make visemes for your avatar and configure them inside Unity!

Pedro Solans
3D ANIMATOR
Junior 3D Animator improving every day possible. Videogame and cat enthusiast.

Premise

With the continuously rising popularity of VRChat and its standardisation as a virtual meeting point, Polygonal Mind contributed to its growth by creating 100 avatars available on the platform. This led the team to think about a "room" in the virtual world of VRChat where you could hang out with friends and change your avatar into one of our creations.
Background of the issue

Creating a visually impressive scenario needs a high dose of creativity, and so Laura Usón led the visual development of the scene, taking the impressionist painter Gauguin as the main inspiration for our environment. The vivid colours of his paintings, and the simplicity of the shape direction and vague definition, are the main highlights of our environment. In this project I took the lead in general composition and technical development, which meant being in charge of all the optimisation and the general look: compositing and creating a good-looking environment through level design, lighting and colour harmony. One of those tasks was to have water flowing through the otherwise static environment. To do so, we relied heavily on animated seamless textures to simulate flowing water.

#01 - Seamless textures can create flowing rivers

It is true that seamless textures are a great way to achieve massive texture distribution without a perceptible repetition grid, keeping everything looking uniform overall. This type of texture also works great for flowing rivers or fluid movement. That said, make shape direction your main ally for flow movements: think of a "starting quad" and an "ending quad" along which the water will flow in a loop. The adaptation of a 3D quad to a square texture also gives you the ability to shape the "stress" of the water flow by enlarging or stretching its area in UV space. If you stretch the quad in UV space, you get a quicker, stressed water flow; if you instead enlarge its length, you get a calmer, slower water path. This technique can be applied to a wide range of standard water flows, like broken pipes or waterfalls. To achieve this, we created a texture that is seamless along the Y/V axis so the river would follow this pattern vertically, with the animation moving the texture tile offset from 0 to 1 over X seconds. This way your "simple trick" animation is always flowing seamlessly. Although you can notice the "square" repetition in the texture tile preview due to small details in the texture, it cannot be appreciated on the river mesh, as its quads only match the X/U vertical straight boundaries, while the horizontal edges keep an unfolded freedom. This means the texture doesn't necessarily match the whole square, as its texel density is uniform along the whole mesh. Of course, this was our desired approach; you can always match the full square for different visual results.

#02 - Seamless textures can create flowing oceans

The river came first, as it is a simple, logically laid-out UV animation, but we found that the ocean couldn't be animated the same way, as its flow doesn't behave the same way. The main issue was to imitate, as a visual resemblance, the ocean waves in a calm way, without creating wave meshes. The attained look was inspired by the following Gauguin painting. From the picture above you can already see how the ocean's shape and look were made: each "wave" was made by creating a river that ran along the shore to the limits of the map, creating a water flow that resembled the sea. At this point you can already guess that you cannot animate it the same way as the river; in this case the animated axis is the X/U axis, moving the foam closer to the shore.
This was achieved by making a seamless connection between all the faces; this time each quad occupies the whole UV region, creating a seamless flow deformed by the mesh itself. The stress points are now made where there are more (and smaller) polygons, instead of smaller UVs. And that's all for the project breakdown regarding this particular environment effect and animation. The next two points focus on creating this effect step by step.

#P1 - OK, I made a river. How do I create proper UVs?

Once you have the initial river mesh, you will probably have a UV mess, because of how Maya behaves: UVs get deformed as Maya performs the commanded operations. My recommendation is to delete all the UVs, select the path loop the river will follow, and use a contour stretch; you should get something like this. As you can see, with the Maya history deleted and the transformations frozen, an open-ended loop will be displayed like this when performing a Contour Stretch. You could say it still doesn't look right (because the whole mesh has been stretched to fill the 0-1 UV space), but it already has the water flow direction. From this point, the best step forward is to scale the whole UV shell along the V axis until you find the desired texture stretching. Remember to rotate the shell and align the last quad of the river as the highest point in the UV coordinates; this way an animation moving towards the positive axis will display properly. Otherwise, the river would climb up the waterfall. With the UV length expanded along the V axis, you now have a proper seamless water flow in a river. Good job! Now let's take a look at the next point.

#P2 - Now I want it to flow: Animation

One of the many benefits of Unity is its simplicity when it comes to animation. Unity's animation engine can animate almost any public property or value displayed in the Inspector of a GameObject in the scene. All you need is to create an Animator Controller in one of your Assets folders, create a default state, and create an Animation clip assigned to it. Once you have done this, you can start creating and editing your water animation in the Animation window (Ctrl+6).
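For reference, the same 0-to-1 offset loop can also be driven from a script in plain Unity. A hedged sketch follows: "_MainTex" and the speed value are assumptions for a Standard-shader material, and a VRChat world would still use the Animation clip approach described above, since custom scripts are not allowed there.

```csharp
using UnityEngine;

// Sketch: scrolling a seamless texture's V offset from 0 to 1 in a loop,
// the scripted equivalent of the Animation clip described above.
// "_MainTex" and flowSpeed are assumptions for a Standard-shader material.
public class WaterFlowOffset : MonoBehaviour
{
    public float flowSpeed = 0.25f; // full loops per second along V

    Renderer rend;

    void Start()
    {
        rend = GetComponent<Renderer>();
    }

    void Update()
    {
        // Mathf.Repeat wraps the offset back to 0 after each full loop, so
        // the seamless texture flows forever without the float growing.
        float v = Mathf.Repeat(Time.time * flowSpeed, 1f);
        rend.material.SetTextureOffset("_MainTex", new Vector2(0f, v));
    }
}
```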
With all these Unity basics you will have a beautiful (maybe simple, but effective) water flow. From this point on, everything is an add-on: there are infinite ways to keep experimenting and improving this workflow, and it will always depend on the artistic and visual approach you are trying to get.

#04 Conclusion

With all that said, I will pack my tools and write my last paragraph as a farewell. This was my very first time making a river flow, sea waves approach and a waterfall come alive, but I had already been experimenting with texture offsets for a wide range of other applications; it's more important to know the useful basics and tech tricks than very particular cases of technology and graphics applications. Experimentation is key in this world! That's all you need to start experimenting with simple water flows. There is no real need for particle systems or complex shaders: with the correct texture and clever usage of UVs you can achieve amazing results that also have less impact on performance.

You are more than welcome to drop any question or your try-outs and results! Join us: https://discord.gg/UDf9cPy

Kourtin
ENVIRONMENT ARTIST
I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving every day to be a better hooman.
Bringing 2D into 3D

Momus Park was the first time we tried this, with surprising results. For those who don't know, the majority of the textures used there are based on The Starry Night by Van Gogh, trying to imitate the spirals and brush strokes of the sky and the city on the models. In this project we took these ideas and methods a step further by not only trying to recreate the texture, but also the looks inside the painting, creating a world in which, wherever you went or looked, you experienced the sensation of being inside a painting, uniting 2D and 3D. Admittedly, this is not a new concept; however, most 3D works that try this method are static images or videos. An impressive technique and design without a doubt, but it's a shame that you can't walk around the worlds they created.
Where to start

Once we had narrowed down what we wanted to do and the feeling we wanted to get, it was time to gather references and styles. In this case we were looking for artists who had painted open-air spaces and gardens, since the main theme of this world, called Avatar Garden, meant the player has to have places to walk through, spots to select the different avatars, and relaxation zones where players can meet other people. The other condition we set ourselves was that it had to be a classical painting: since we were connecting 3D and 2D, we thought it would be interesting to join a classic art medium with a new one. Putting these two things together immediately brought to mind the impressionist movement, with artists such as Claude Monet or Édouard Manet. This style has a lot of characteristics that make it ideal for this: the textures have a lot of personality, with a strong presence of brush strokes and spots of colour, making it really easy to create tileable textures. Finally, after reviewing a few artists, we settled on Gauguin and his Tahitian landscapes.

Take notes

Once we have an artist selected, it's important to take notes. Each artist has a particular style that took years to develop, and if we want to recreate their paintings or artworks, it's a must to try to imitate these details. In this case we can pinpoint a few things from the get-go that will help us recreate it:
Once we had these main points clear, it was time to prepare the props and assets. To get started, we created some of the most basic props to fill a simple scene. That way we could define more details and polish the models even further. This also helped us develop a pipeline that allowed us to work more efficiently, which more or less consists of this:
Finally, in the spirit of Gauguin, we decided to change the world to an island instead of a garden.

Improvise, Adapt and Overcome

Creating this pipeline gave us an idea of what we would need to make; however, it's not as easy as that. As always happens in this kind of project, we ran into a series of problems, especially when adapting some of the assets to the painting's style. There were times when we had to reject some models because, even though the final result might look impressive, the time spent on it, the resources it took or the number of tris made the process not worth it. For example, the tree below (reference on the left): to recreate it, we made a base for the leaves with a green texture, and then recreated the yellow strokes with transparent planes. On the right you can see the tree in progress. Sadly, we had to drop it for the reasons stated above. Another example comes from the vegetation, in particular the bushes. Since their forms are not very detailed, it took a lot of tries until we achieved a satisfying shape; it was difficult because they often resembled deformed spheres. Since this was a necessary model, we reworked it until we achieved the desired result. The most important lesson here is that not everything comes out as you imagine it, so when you get to this point, it is important to stop and think about it. Is it necessary? Is it taking a lot of your time? How can I change it? This is normal, so it's important to improvise, adapt and overcome. If you keep going, you can finally obtain the desired results, as you can see.

Conclusion

While working on this project we discovered that it was a good way to test our limits and what we can do within an environment project. We tried to recreate the paradise that Gauguin saw when he first travelled to Tahiti. All in all, I hope to see you there.

Laura Usón
3D ARTIST
Passionate about videogames, movies and creatures. Artist by day and superhero at night.