Polygonal Mind Blog

How to optimise VRChat avatars for Quest

10/3/2019

Premise
Back in May the Oculus Quest was released: a standalone device that lets you use VR without a PC or any wires. Until then you needed a high-end computer to run VR games and experiences, so developers and creators didn't have to optimise their content or avatars for VRChat nearly as much.
The Mission
The Quest has proven to be a very successful headset so far, and there is a growing number of Quest users on VRChat. We'll use our 100 Avatars to show you how we reduce and optimise our avatars so Quest users can wear them in VRChat.
Resources
  • MayaLT
  • Zbrush
  • Photoshop
  • Mixamo
  • Unity (latest stable version)
Butter avatar in vrchat 7eleven
Before starting

During Q4 2018 the Polygonal Mind team took on the challenge of making 100 characters in 100 days; you can read more about it here.
My friend Alejandro Peño and I joined the studio as interns and were tasked with preparing, optimising and uploading over 100 characters to VRChat for the Oculus Quest.
It was a challenging workload, but through consistent work we were able to transform these characters into optimised avatars for VRChat.
Skull avatar selfie into vrchat
Some characters proved to be more difficult than others, so I'll explain the problems I faced when fixing non-optimal characters and how I managed to solve them. Even though we used Maya in the studio, all of this knowledge applies to any 3D modelling software.
So I'll recap a series of problems I faced when setting them up for VRChat.
Let's start optimising

Most of the models weren't originally created with VRChat in mind, let alone the Oculus Quest.

Thankfully, all the characters were already modelled and textured, so we only had to rig and upload them.


Or that's what I thought...
Cool Choco selfie into vrchat
The VRChat team provided the following rules for Quest avatars:
  • 1024x1024 texture
  • Less than 5000 tris
  • Preferably only 1 material
  • Around 5 to 8 MB uncompressed
Step 1 - Reducing Textures

This might be the easiest of all the steps.
All the characters used 2048x2048 textures, so we had to reduce them to the target size.
In Photoshop, we created a new document at 1024x1024 pixels and imported all the textures. Once they were all in and scaled to fit the canvas, we exported each layer as an independent PNG.
1024 textures edit into Photoshop
Export texture images as PNG
Since they already had the appropriate names, we had 100 textures ready to go.
100 texture files png
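If you'd rather automate this downscale outside Photoshop, a short script can batch-resize a whole folder of textures. Below is a minimal sketch using Python and Pillow (not part of our actual pipeline); the folder names are placeholders.

# Batch-downscale 2048x2048 avatar textures to 1024x1024.
# Minimal sketch using Pillow; "textures_2048" and "textures_1024" are placeholder folder names.
import os
from PIL import Image

SRC, DST, SIZE = "textures_2048", "textures_1024", (1024, 1024)
os.makedirs(DST, exist_ok=True)

for name in os.listdir(SRC):
    if not name.lower().endswith(".png"):
        continue
    img = Image.open(os.path.join(SRC, name))
    # LANCZOS resampling keeps hand-painted detail reasonably crisp when halving resolution
    img.resize(SIZE, Image.LANCZOS).save(os.path.join(DST, name))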
Step 2 - Polycount reduction

Most of the models had the right poly count, but some others didn't.
Franky's head is a clear example: it had 12,572 triangles.
It is tempting to use an automatic tool such as decimation to reduce the polycount, but we do NOT recommend it. It will likely destroy your edge flow, can ruin the shape of the model and can break UV layouts, so think twice before using it.
Franky frankenstein avatar by polygonal mind
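Before reducing anything by hand, it helps to know exactly which models are over budget. A quick way is to print triangle counts from Maya's Script Editor; here is a minimal sketch using maya.cmds, with the 5,000-tri threshold taken from the VRChat guidelines above.

# Report triangle counts for every mesh in the scene and flag the Quest-unfriendly ones.
# Minimal sketch for Maya's Script Editor; run it in a scene with your characters loaded.
import maya.cmds as cmds

TRI_LIMIT = 5000  # VRChat Quest guideline listed above

for shape in cmds.ls(type="mesh", noIntermediate=True):
    transform = cmds.listRelatives(shape, parent=True)[0]
    tris = cmds.polyEvaluate(shape, triangle=True)
    flag = "  <-- over budget" if tris > TRI_LIMIT else ""
    print("{}: {} tris{}".format(transform, tris, flag))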

Here are some rules we follow when it comes to reducing polygons:
  • Always keep the shape intact. In this case the head must still read as a head, with its rectangular silhouette. After deleting an edge, zoom out and rotate the model to check that everything still looks good.
  • Don't delete edges along a texture seam. This will cause the UV map to go bananas; it can take a lot of time to fix, and even then the result can look really bad.
  • It's crucial to reduce while keeping the edge flow the model already has; deleting edge loops left and right without paying attention will lead to sub-optimal results.
Franky avatar high poly
Franky avatar low poly vrchat
...wait. What if the map seams are not optimal?

What can you do when there are map seams literally everywhere? That's what happened to the 50th character, Samuela.
Samuela avatar polycount vrchat
As you can see on the left, no matter which edge loop you tried to select, there was always a seam. We couldn't find a way to reduce the tri count (7.5k) without redoing the texture.
Thankfully, we found a workaround for this model.
We duplicated the model and started deleting edges without worrying too much about seams or texture, since we were going to make a new UV layout once the model was reduced.
Once the poly count was around 4.5k, we made the new UVs. With that done, we exported each separate part (eyes, hair and ponytail) of both the new and the old model.
3d files obj export
Once in ZBrush, with every mesh and the old texture imported, we took the old Samuela model, subdivided it and converted the texture to polypaint.
Beware: ZBrush applies colour per vertex, so you will need to subdivide your model until it reaches around a million points to keep as much of the texture's detail as possible.
Then it's time to project the old model's polypaint onto the new one: subdivide the new model until its poly count matches the old one and simply project the old Samuela onto it. Repeat this for every subdivision level until you get enough texture detail on the new model.
zbrush divide
Samuela textures UVs
zbrush project all
Samuela vrchat texture uv
Note that the projection might not be precise, and you might have to touch up the texture in Photoshop.
Adding a mouth and eyes to an existing model for visemes

This part is completely optional, but it really brings your characters to life in game.
VRChat allows you to add facial expressions to your avatars, so we added a mouth and eyes to every character to make them blink and talk using blend shapes.
Some characters already had eyes and a mouth, but others, like Samuela, had them embedded in the texture, so we had to create them.
Samuela eyes and mouth add
For a quick turnaround, this is what we did:
  • Extrude the mouth inwards to make a hole
  • Modify the topology if needed
  • Adjust the UV maps to match the lips and the inside of the mouth
  • Add some teeth and a tongue inside
  • Keep the 5k tri limit in mind
Rig

For the rig, we used Mixamo. Mixamo is a website that rigs and skins models automatically given a few markers like the position of the wrists, elbows, knees, chin and groin. For the most part, Mixamo did a pretty good job, especially for the humanoid characters, but for the not-so-human ones you had to edit the skinning to get a great result.
How to fix those is a topic for a different day; we'll cover it in more depth in a future post.
Materials

Like many of you reading this, we first uploaded the characters to VRChat thinking only of PC users, so all the materials were left with Unity's default shader. Quest avatars require a mobile diffuse shader, so we had to change them.
If you have followed a good naming convention, this will only take a minute. For example, we add an mtl_ prefix to all our materials, so in Unity you can type the prefix in the search bar to quickly select them all and change them at once.
Unity 3D textures avatars vrchat
Unity material
Conclusion

100 characters is a lot. But like I said earlier, with some structure and consistent work we made it happen in three weeks. At Polygonal Mind we use Notion.so to keep all our projects and tasks organised.
That being said, there were a bunch of characters that needed little to no optimisation, and others that needed almost a full rework. This stuff takes time, especially when you count them by the hundreds.
I hope this guide helps you optimise your avatars for Quest users. It was a challenging project for us, but the work pays off quickly once you see players wearing your avatars in game.
So sit back, put on some music and start working. It's been really fun making these, and seeing avatars you've worked on being used by other people is a great feeling.
Hot Dog avatar VRChat 7eleven
Post by:

Pedro @05predo

Pedro Solans joined as an intern and is now a junior animator on Polygonal Mind's in-house team.
Animation film lover, Tekken player and animal hugger.

Daniel @toxsam

Daniel García (aka Toxsam) is the founder and creative director at Polygonal Mind.
In love with videogames for as long as he can remember, passionate about geometry, VR addict and energetic persona.


I'm LODing it (Level of Detail)

9/16/2019

The Mission
Creating LOD meshes for game engines in a fast and reliable way.
Resources
  • MayaLT 2019
  • Unity 2019
  • A mesh you want to LOD
First of all, what's a LOD?

LOD (Level of Detail) is a game optimization method that decreases the complexity of a 3D mesh (or lately even shaders, textures, etc.) as it gets further away from the player, and it's usually used in conjunction with other optimization techniques like culling. The most common implementation uses a secondary, lower-resolution mesh that replaces the original at a certain distance, to avoid rendering unnecessary detail. The reduced mesh can be generated automatically, but you will have to fix things manually!
LOD levels into Maya 3D
Using MayaLT to create automatic LODs

Maya has a built-in tool that lets you create automatic LODs based on either camera distance or a percentage of the total poly count. You can access it in Edit > Hierarchy > LOD (Level of Detail) > Generate LOD Meshes. Click the option box next to this last menu item to access the options instead of the defaults.
generate LOD meshes with Maya
These are the settings we've been using for our latest project, which you'll know more about soon!
In many cases (most of the time, to be honest) you'll just use this as a kickstart and improve upon the automatically generated mesh.
Maya generate LOD meshes options
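If you prefer to drive this step from a script rather than the menu, a rough equivalent can be put together with maya.cmds by duplicating the selected mesh and running polyReduce at increasing percentages. This is only a sketch of the idea, not the code the Generate LOD Meshes tool actually runs, and the percentages are placeholder values.

# Duplicate the selected mesh and reduce each copy to create LOD1-LOD3.
# Sketch only: this approximates the Generate LOD Meshes workflow with polyReduce;
# it is not the built-in tool, and the percentages are placeholder values.
import maya.cmds as cmds

LOD_PERCENTAGES = [50, 75, 90]  # how much geometry to remove at each level

source = cmds.ls(selection=True)[0]
for level, pct in enumerate(LOD_PERCENTAGES, start=1):
    lod = cmds.duplicate(source, name="{}_LOD{}".format(source, level))[0]
    # ver=1 picks the newer reduction algorithm; flag names may vary slightly between Maya versions
    cmds.polyReduce(lod, ver=1, percentage=pct)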
General Workflow

1. First, duplicate the mesh we are going to LOD and hide it. This is the equivalent of duplicating the background layer in Photoshop and working on a copy, just to make sure we keep the original safe in case something goes wrong.
Boat mesh into Maya
2. Then use the tool to create as many LODs as you need.
Maya tool to create LODs
3. Extract the meshes from the LOD group by unparenting them, so you can examine and fix any problems individually.
Maya extract LOD meshes
4. Apply the original mesh's material to the LODded one, so you can see how close it looks to the original.

5. After that, follow the troubleshooting steps below to fix the problems you caught.
6. Finally, when you're happy with the result, rename everything to your chosen naming convention and export.
Troubleshooting

  • Imperfect auto-LODs: As I said before, the automatic tools are not perfect, so it's up to us to fix the computer-generated errors. Round or cylindrical surfaces in particular are very hard to get right on the first try and usually need some more love from the human side.
Original LOD mesh
Original
Maya automatic LOD mesh
Automatic LOD
Modified LOD mesh maya
Modified fixed LOD
  • UVs got distorted: automatic LODding usually maintains the UVs for the most part, but if you make any manual changes to improve the mesh (SPOILER: YOU WILL) you'll need to fix the UVs to match the original mesh's, especially after using Target Weld, Merge and similar operations.
UVs automatic LOD Maya
UVs LOD modified
  • Popping: When a LODded mesh is very different from the original one, you can see what's called a pop, or popping, at the transition. Our job is to make this transition as seamless and smooth as possible, for example by using more LOD levels. The idea is to balance optimization with good-looking LODs, so the player doesn't notice the low-poly model too soon.
LOD change when camera go further
Using LODs in Unity 2019

Unity has a built-in LOD system. First, you need the correct hierarchy for it to work: import your mesh and its LODs into the scene, create a new empty GameObject and put the original mesh and its LODs inside it. Then select the parent, go to the Inspector panel, click Add Component and search for "LOD Group".
LOD groups into Unity
LOD group Unity 3D
You'll then see four slots for the different LODs. You can add or delete as many as you want by right-clicking on them and selecting "Insert Before" or "Delete", respectively.
Edit LOD group into Unity
You can now assign one of the meshes you prepared earlier to each LOD: click on the desired LOD level, then click the big square "Add" button.
LOD example into Unity 3D
Also, by dragging the boundaries between the LODs left or right, you can set the distance at which each LOD kicks in. By dragging the camera icon you can preview the system working as it would in real time, as the player gets closer to or further from the mesh.
Camera movement for LOD meshes into Unity 3D
The Conclusion

As with any tool, if you just keep the automatically generated results, the outcome is going to be a lot worse than if you tweak them a bit to fit your needs (compare drag-and-dropping Smart Materials versus actually understanding material layering and using Smart Materials to accelerate your texturing process in Substance Painter or Quixel).
The beauty of this is the combination of the automatic process and human input: you get the mesh faster than doing it all manually, but with a better result thanks to the fixes done by the user.
Post by:

Alex @Vontadeh

Alejandro Bielsa is a junior 3D artist working at Polygonal Mind's in-house team.
Passionate about videogames, vivid tutorial drinker and cat lover.


The challenge of making 100 Avatars in 100 days

6/6/2019


The Mission

Creating content on a daily basis and keeping up steady work over a long period of time is both daunting and exciting.
In this case study I'll explain how a silly summer idea evolved into a social media challenge, and then into cool avatars that thousands of people are using in VR right now.

Resources

  • Zbrush
  • Unity
  • MayaLT
  • Camtasia
  • Mixamo
  • Notion
  • VRChat
  • LIV

The impact

The avatars were featured by VRChat.
Unity's social media account reposted our content during the challenge.
Our followers increased on both Instagram and Twitter.

Idea Background

It all started back in September 2018. I wanted to start investing time into developing new game art styles using Unity. I wasn't sure how to start, but then I found out about the 100-day drawing challenge by Amanda Oleander on her Instagram.
Her challenge and commitment inspired me so much that I made my own version of it: 100 characters, one character a day for 100 consecutive days.

My condition was to make an Instagram and Twitter post with a moving character every day, so I had to create a steady workflow that would hold up for every day of the challenge.

The challenge process

To succeed at a challenge like this you should define a process and try to follow it every day. This will help you focus and will slowly reduce the time you have to dedicate to the challenge, since your brain will be learning and adapting to the tasks.
If you don't know how to set up a process, that's okay; most of the time processes are the result of repetition. Just start, do it once, write down the steps you took to get to the final result, then try to repeat them the next day. Over time, the process takes shape, evolves and improves.
During the 100 days a lot of people asked us how we managed to make one per day, so here is a general overview of the character creation process. Most of the characters follow this scheme.

   - Conception

With the help of ZBrush and Dynamesh we can quickly generate a "shape sketch" of our model without worrying too much about the final polycount.
The result of this doodling serves as a good base to generate the final topology on top of.
Face of a character
3d character model from different positions
I personally like doing the retopo in ZBrush with ZSpheres, but you can use Maya, Blender, Topogun or any other 3D software for retopology.
If the shape is simple, say a square or a circle, I just export it as a reference and model it in MayaLT.

   - Fixing retopology with MayaLT

I found over time that doing retopo with ZSpheres can generate issues, so instead of trying to fix them in ZBrush, I just keep the mistakes and fix them later in a 3D modelling tool such as MayaLT.
Most of the work at this point is reducing and removing triangles, and fixing the loops the mesh needs for animation.
Even if you did the retopo in another software, I recommend rotating the model around and looking for any issues before starting the UVs.

   - UVs and Textures

To generate the UVs of a character, I like to make a planar projection of the model, then start making UV cuts in the most suitable zones.
Here you can see the cuts we made for the characters; almost every model roughly follows this pattern.
Once all the cuts are done, you have to unfold the UVs.
To create the texture, we like to use Adobe Color in the studio. It easily helps us find color schemes that work for our characters.

   - Rigging and Animation

In order to save time we used Mixamo for both rigging and animation.
Mixamo is a free online tool that automatically generates humanoid rigs, and it also provides a gallery of premade animations that you can use in your project.

This tool is used a lot for quick prototyping, so it was a no-brainer for a challenge like this one.
Gif of our 3D character dancing

   - Unity scene set up

Finally we get to the part where I wanted to invest more time: Unity. The whole point of this madness was to force myself to use Unity as a quick tool to develop new visual concepts and ideas for future projects.
I'm not going to dig into every detail of what I did with Unity, but there are a couple of assets that helped me save time and get great results during the 100 days.
  • Toony Colors Pro 2 by JMO
I just love using it; making a cool-looking material with this shader is extremely easy.
What I usually tweak in materials are the colors and shadows, and I also like to play a bit with the outlines.
You can find a link to this asset here.
  • Post Processing Stack by Unity Technologies
This asset allows you to quickly add different post-processing effects to your game scene.
You can find a link to this asset here.
Trying materials and shaders with our alien

   - Final steps

Once you have a cool-looking scene it's time to add music. Sometimes when looking for music I found myself changing the animations to fit the sound.
You can find tons of royalty-free music online.

Once the music is chosen, we used Camtasia to record the Unity scene and edit the final video.
Now that everything is done, it's time to post it on Instagram and Twitter.
Hot Dog avatar for vrchat dancing

   - BONUS: extra tips to iterate faster on a challenge like this one

On some days I used tricks to cut time on certain parts of the process so I could invest more time in others.
Keeping body parts, then cloning and reusing them on different models, reduces iteration time a lot.
On other types of models, like the food and tool ones, I duplicated the face, legs and arms so I only had to model the body.

Uploading them to VRChat as avatars

Halfway through the challenge we came up with the idea of giving a second life to all the characters by transforming them into avatars for the Metaverse. They were all already rigged with Mixamo, so we knew from experience that they could be used, at least in VRChat.
Months later I decided to give the avatar idea a go with the help of two interns in the studio.
VRchat promo image with Polygonal Mind's avatars
Initially we just wanted to give them a simple rig and upload them to VRChat, but a few days into the work the VRChat team reached out to us. They loved the variety of our characters and suggested we give them some extra love by adding visemes and optimising them for the new Oculus Quest release, so they could be used by even more players.
So we improved them and created a VRChat world to gather them all. I must admit that investing some more time into adding visemes made the characters way more interesting and fun to use!
Here is a screenshot of our Notion.so board in the middle of the project.
There is already a lot of documentation about how to upload avatars to VRChat, so we won't cover any of it in this post, but we'll be releasing another blog post later on with some tips for optimising avatars for Quest using MayaLT.

Next Steps for this project

As you can see in our roadmap, right now our closest goal is to keep uploading all the characters to VRChat with visemes; we're really close to having them all up and ready to use. At the same time we'll be improving the world too.
After this, our next milestone is to tokenize these avatars using the blockchain. Our final goal is to release all the model files for free download on our site as "open source avatars", so anyone can use them in any virtual world platform or project they're developing.
While I was writing this post, the guys at LIV reached out to us about using the avatars on their platform, so you'll be able to use them for streaming Beat Saber soon.
If you have a VR platform or project where you might want to use our characters, feel free to reach out so you can try them before we make the open source release.


Conclusion

What started as just a fun challenge to explore ideas has become a larger project that is gaining interest in the VR community. Having 100 characters sitting on a hard drive felt like I had wasted a lot of time for a bit of Instagram awareness, but repurposing them for VR has been one of the most interesting things we've ever done in the studio. Walking around VRChat and seeing people having fun with your work is just amazing.
Frog takes a selfie into vrchat with our Hot Dog
If you want to see the characters from the challenge, you can check our Instagram and Twitter accounts.

Post by:

Daniel @toxsam

Daniel García (aka Toxsam) is the founder and creative director at Polygonal Mind.
In love with videogames for as long as he can remember, passionate about geometry, VR addict and energetic persona.



Setting up LipSync asset for a RPG mobile game

2/14/2019

4 heads with different mouth positions

The Mission

Animating every dialogue sequence in an indie game can be a very expensive and tedious process, so we decided to approach the problem from a different perspective.
In this case study I want to talk about how I implemented systemic lip sync in Unity for Ravensword Legacy, the game we are making with Crescent Moon Games.

Resources

  • MayaLT 2018
  • Unity 2018.2
  • Lip Sync Pro from the Unity Asset Store

This post is about a premium mobile game called Ravensword Legacy, which we are working on together with Crescent Moon Games and which is currently in development.
This week I had to revamp the awesome characters already made by another team member in order to let them talk. After some research, I found a Unity plugin called LipSync Pro that lets you add keyframes to Audio Clips so that the talking character moves their mouth accordingly (it also supports other blendshapes, like blinking or yawning, and even preset expressions such as angry and happy, so you can assign an expression to each of the character's lines).

The core of spoken languages

A phoneme is one of the minimal units of sound that distinguish one word from another in a particular language; in English, for example, there are 44 phonemes. Similarly to VRChat's system, LipSync uses phonemes to choose between the different mouth shapes that represent a specific sound. This way, we can assign each keyframe in the Audio Clip to one phoneme, and the mouth will adapt.
For this kind of work, game developers usually group phonemes together: for example, the "k" in "key" sounds the same as the "c" in "car", so only one mouth shape is needed for that sound. The same goes for "m", "b" and "p", and so on.
This is the simplified list that LipSync asks us to fill in to work its magic. You don't need to fill them all; in fact we're only using three blendshapes (A/I, E, O) plus the resting one.
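To make the grouping idea concrete, here is a tiny illustrative mapping for a three-shape setup like ours. The exact grouping below is a hypothetical example, not LipSync Pro's internal table.

# Illustrative phoneme-to-viseme grouping for a three-shape setup (A/I, E, O + rest).
# Hypothetical grouping to show the idea described above; not LipSync Pro's internal table.
VISEME_GROUPS = {
    "AI":   ["a", "i", "y"],   # open, wide mouth shapes share one blendshape
    "E":    ["e", "s", "t", "d"],
    "O":    ["o", "u", "w"],
    "rest": ["m", "b", "p"],   # closed-lip sounds fall back to the resting pose
}

def viseme_for(sound):
    """Return the blendshape group to use for a given sound."""
    for viseme, sounds in VISEME_GROUPS.items():
        if sound.lower() in sounds:
            return viseme
    return "rest"

print(viseme_for("a"))  # -> "AI"
print(viseme_for("m"))  # -> "rest"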

Adapting to the new needs

I proceeded to modify the models: open their mouths, and add the inside of the mouth (commonly called the mouthbag), tongue and teeth. I also had to modify the textures so the teeth, tongue and mouthbag were textured.
Head model with tongue and teeth
After this, I duplicated the resting pose three times and modified the copies for the A, E/I and O phonemes. As the game is low poly, with pixel post-processing and a limited colour palette (sometimes even as low as 8 bits!), too much fidelity and/or fluidity would make it look uncanny.
4 heads with different mouth positions
These heads were then exported as a single head with four blendshapes, using the modified mouths as targets for said blendshapes.
Hero 3d model complete assembly
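If you want to script that export setup in Maya, a blendShape deformer with the phoneme heads as targets does the job. Below is a minimal sketch with maya.cmds; the head names are placeholders, not our actual asset names.

# Create a blendShape deformer on the resting head using the phoneme heads as targets.
# Minimal sketch; "head_rest", "head_A", "head_EI" and "head_O" are placeholder names.
import maya.cmds as cmds

targets = ["head_A", "head_EI", "head_O"]
base = "head_rest"

# The resulting blendShape node carries one weight per phoneme target,
# which is what LipSync Pro drives at runtime once the head is in Unity.
blend_node = cmds.blendShape(*(targets + [base]), name="phonemeShapes")[0]
print("Created blendShape node:", blend_node)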
Then I repeated this process for a couple of NPCs and the other eight head variations of our hero. Once that was done, I headed over to Unity and imported the new heads, replacing the old ones. I also imported a character line from one of my favourite videogames for testing purposes.

Setup of the system

I created a LipSync info .asset file from that Audio Clip via LipSync's Clip Editor (shortcut Ctrl + Alt + A) and started adding the phonemes that matched what the line was saying. Having only three phonemes really sped up the process; otherwise it would have been too tedious. Once that was done, I saved the LipSync info .asset file in the same folder as my Audio Clip.
LipSync program
Each of these black markers means that the mouth will change to the specified phoneme at the specified time. Once this was done, I went back to the character head's prefab, added the LipSync script and assigned the head mesh as the main mesh and the teeth as the secondary mesh; this means the head blendshapes will drive the teeth ones too. I also assigned this character's Audio Output as the source of the line's sound and dropped it into the slot.
I then specified which blendshapes were to be assigned to which phonemes, so that LipSync knew which blendshape it had to change every time the time slider passed over a phoneme marker.
I then tweaked the rest of the settings a bit more towards what I was looking for: rapid opening and closing of the mouth after each phoneme, and Loop enabled while I set everything up and changed the settings.

Conclusion

And so this is the end result! It was a very fun experiment and I'll probably end up using this method again in the future for personal projects. 
Please be aware the audio clip was only a test to make sure the plugin worked; it isn't intended for the final product, since it's a dubbed line from another game.
If this was helpful to you in any way, please consider sharing it with your gamedev friends; we really appreciate your support!

Post by:

Alex @Vontadeh

Alejandro Bielsa is a junior 3D artist working at Polygonal Mind's in-house team.
Passionate about videogames, vivid tutorial drinker and cat lover.

Learning Unity's Cinemachine

10/21/2018

This post is about a premium mobile game we are developing in-house, called Ma'kiwis.
Long story short, Ma'kiwis is an adventure game for mobile devices where you play as a shaman leading little tribe people to safety.
This week's sprint was about adding a few items to the game and creating the first cutscenes, so we could start testing the game flow with them in place. I was assigned to work on the cutscenes that introduce the player to the game's plot and gameplay, basically the tutorial.
Story boards of the cut-scenes in Level 1

Maya animations + Unity's Animation System was too messy

After trying for a few days I felt the system we were using previously was a bit limiting: it wasn't letting me do basic things like blending cameras or timing events such as camera shakes or starting and stopping Particle Systems. We even had to animate character interactions together as a single GameObject using Unity's Animation system.

The thing I disliked most about the previous system was the inability to blend between preset cameras.

This meant that we couldn't go back to the main gameplay camera after a cutscene, which resulted in hard cuts every single time, or fades to black. This felt too repetitive in my opinion, since there are already a lot of other camera cuts when simple events occur in game, like activating a switch or picking up a collectible. I wanted something a bit more dynamic that attracted the player's attention, so blending between cameras was really needed.
After talking with the rest of the team, we decided to upgrade the Unity version we were using (from 5.6.4f1 to 2018.2.0f2) so we could use Timeline (included since Unity 2017.1) plus Cinemachine.
Example of the Popping problem when using Animator

Using Cinemachine

Cinemachine is a free asset developed by Unity that brings a lot more freedom and a more cinematic look to the Unity camera system, letting you control field of view, depth of field, camera collision and the much-needed blending between cameras, among other great features.

It took me a couple of days to understand and get used to, but it finally allowed us to have timed flashes, explosions, sound effects, animations, scripted damage and, of course, seamless blending between cutscene cameras and gameplay cameras, all together in a single Timeline.

Cinemachine + Timeline is a very powerful combination!

This system was achieved by having a separate GameObject with the same follow script the Main Camera has, so it's always at the place the gameplay camera should be (even during cutscenes).
This is needed because we actually blend from the Main Camera to the position of the cutscene camera, which used to cause a stutter because the follow script wanted to go back to the gameplay position. Basically there were two parts telling the camera what to do: the follow script was telling it to keep aiming at the player, while the blend we were transitioning through was telling it to follow the spline/bezier until it reached the cutscene camera's position.
This way we always know where the camera should be during gameplay, so we can blend back to it after a cutscene.
Another possible solution would have been to have a master camera and use the gameplay camera as one of the cutscene ones, so the master could blend between them without stutter, but that would have meant changing the whole camera system, and we couldn't afford that.

Hope my struggle with the cutscenes can help someone. :)

Post by:

Alex @Vontadeh

Alejandro Bielsa is a junior 3D artist working at Polygonal Mind's in-house team.
Passionate about videogames, vivid tutorial drinker and cat lover.

