
Export a GLTF/GLB with several animations

4/29/2021

The Mission
As we advance with new projects in Decentraland, we learn new techniques and new software.

One of the tools we have been using the most lately is Blender, since it exports the GLTF and GLB formats really well. We have talked about animating in Blender in other posts.

So here is a mini guide on exporting a GLTF model with several animations attached, so you can alternate between them in a GLTF viewer of your choice.
Resources
  • Blender 2.9
  • GLTF/GLB viewer
Stash animations

So the first thing you need to know to create different animations in Blender is: what are Actions? How do you create one? How do they work? And how do you delete them?

An Action in Blender is a tool to record and contain animation data. Like everything else in Blender, Actions are data-blocks. When you animate an object by changing its location with keyframes, the animation is saved into the Action. That way you can create as many animations as you want, one per Action.
​
Each animation can be a different action on the character or object, for example run, idle, shoot...
To create a new action, you will have to change one of Blender's editor windows or create a new one. To do that, move your cursor to one of the corners of a window until the icon changes into a cross, then click and drag to create a new window.
Cursor and position to create a new window
Then, to view the action menu, switch the editor type to the Dope Sheet, and inside it change the mode to the Action Editor.
Changing the window to dope sheet
Menu to change to the Action Editor
When you have the editor in view, you can start creating actions, but before you do, keep a couple of things in mind:
​
  • Actions can only be linked to ONE object; if your model is split into two pieces, you can't assign the same action to both.
  • To get around that, the actions should be linked to the Armature. In Blender, an armature is similar to an empty group that is the parent of the model and the bones.
  • As the term suggests, an armature is related to bones. When you create an Armature, it automatically creates a bone linked to it, so when you animate the Armature, the bones are affected too.
  • And finally, yes, that means the models must be animated with bones.

Once you have created an armature (Add → Armature in the top corner), you are ready to create actions. With the Armature selected, press the New button, change the name of the animation and don't forget to press the shield icon (if you do not press the shield, Blender will not keep the action when you save, and you will lose the changes you made to it).
With the armature Selected create a new action
When you finish with an action, close it by pressing the X button on the right; that takes you back to the first menu of the editor so you can create a new one from scratch. An important note: if you want to delete an action because you don't like it, don't need it, or duplicated it by mistake, DON'T press the X button alone (as stated before, this closes the action but doesn't delete it), and DON'T press the Delete or Backspace keys; they won't delete it either.

The ONLY way to delete it is to press Shift + the X button on the right, then close and reopen the file so the deletion takes effect; until you restart, you will still see the actions you deleted.
Check the shield to save the name and the action
Once you have finished with all the actions you want to make, you have to stash them on the model to make them selectable in the glTF file. To do that, press the Stash button on the left side of the Action Editor; make sure you don't want to edit the action anymore before stashing it.
Menu with different actions
From Maya To Blender

Now let's say that you want to work in another software like Maya. This time the change is easy, but you have to follow a series of steps.
  1. Do the animations with bones; when you import them, Blender will create an Armature.
  2. Make sure all the animations are on the same timeline. You can separate the frames to divide the animations, but keep them on the same timeline.
  3. Export as FBX with the bones, and check the Bake Animation option.
​
Once you import the FBX into Blender, it will create an armature in the scene with the animations in it. Then all you have to do is:
  1. Rename the action.
  2. Duplicate the action as many times as animations you made.
  3. Delete the frames you don't need and move the ones you do need to the start of the timeline.
  4. Once that is done, stash the animations and export the model.
Exporting and Viewing

When you export the model, check a few options in the presets:
  • Limit to Selected Objects: if you have several objects in the scene, make sure you select everything inside the armature you want to export.
  • Apply Modifiers: applies the deformation of the bones to the models.
  • Under Animation → Skinning, enable Include All Bone Influences to make sure all the bones are exported.

Once you export it, you can check that the model is correct in a GLTF/GLB viewer; there are several available online. When you load the model, make sure the viewer has a tab where you can choose which animation to play.
Web with the different animations to select

Laura Usón
​​​3D ARTIST
​Passionate about videogames, movies and creatures. artist by day and superhero at night.

How to use your CryptoAvatars in Zoom meetings

4/28/2021


 
The Mission
Bought an avatar on CryptoAvatars? Want to use one from our 100Avatars project and show it to everyone? This step-by-step guide will tell you how!
Resources
  • Zoom
  • VMagicMirror
  • Unity 2019.4 LTS (only needed if you don't have a VRM yet)
Getting your VRM ready

If you bought your avatar from CryptoAvatars.io or you already have a VRM file, congrats! You don't have to do anything here and can skip to the next section.

This part of the guide is only for people who want to use one of our +100 free avatars, or any FBX model, for their meetings.

Get your FBX ready, because you are going to turn it into a VRM file. For this part you will need Unity 2019.4 LTS and the latest available version of the VRM plugin for Unity, which you can download right here: https://github.com/vrm-c/UniVRM/releases
​

Once you have both, just create a new Unity project and drag and drop in the UnityPackage containing the VRM plugin.
Avatar VRM plugin
Get your FBX and texture into the project too, and create a new material. Make sure everything is correct. Things you should be on the lookout for:

  • Material is using the VRM/MToon shader
Material VRM/MToon shader
  • Avatar has the right size. Create a 3D cube and put it right next to your model; the cube is 1x1x1 meters, so use it as a reference. Change the size via the Scale Factor import parameter, not by scaling the object.
Avatar has the right size
  • It uses a Humanoid rig. You can change that right here.
humanoid rig
  • Legacy Blend Shapes is enabled. You can turn it on here.
Legacy Blend Shapes
Now that everything is ready, it's time to export.
​
Select your avatar and go to the top left of your screen, to VRM → UniVRM 0.58 (or whatever version you are using) → export humanoid.

VRM file
Set the language to English, if you haven't already, and add a title, author and version for the avatar.
VRM file
Now click export and save it wherever you want.
​
Nice! You now have a VRM file that you can use on the next steps.
Basic VMagicMirror settings

VMagicMirror settings
You should have your VRM file ready to go. First things first: download VMagicMirror, the program used to make your avatar appear and talk. Here's the link:

https://booth.pm/ja/items/1272298
​

Download the program, extract it and run it; it should be ready to go.
Once you start it up, you will see a green screen and another window in Japanese. We will set the language to English, of course.
Language settings
Now it's time to load our VRM file. Use the "Load File on PC" button to find your VRM file.

The avatar will probably appear out of frame. Right below the load button there is an "Adjust Size by VRM" button; click it and everything should come into frame a bit better.
Adjust Size by VRM
It is very likely that the avatar's arms will look broken; we are going to fix that now.

Go into the settings menu, then into the Motion subsection, and you will see the Arm menu. There are two options you want to look at: "Waist width" and "Strength to keep upper arm to body [%]".

You can tinker with those two options until you get a desirable result.
Avatar skeleton
As a last resort, if for some reason the arms still look off, you can disable all arm motion by enabling the first Motion option, "Always Hands-Down Mode", and tinker again with the previous settings.

Every avatar skeleton is different, so each needs some personalized changes, and sometimes the program doesn't recognize all the bones or how they should behave.
More VMagicMirror settings

The basic stuff is ready to go, but if you want to change anything else, these other settings might help you tune things to your liking.

Changing the position of the camera

Open the settings window and find the "Layout" menu. Here you'll find all the Camera settings you need.
​
By checking "Free Camera Mode" you'll be able to move the camera all you want inside the view window where your avatar is. Clicking with the middle mouse button lets you pan the camera, the right button rotates it, and the scroll wheel zooms in and out. You can also change the Field of View (FOV) just below "Free Camera Mode".

Position camera
The last options in the menu let you keep presets for different camera positions. Just move the camera to where you want it and click any number in the Quick Save row. Once you do, the numbers in the Quick Load row light up, and you can switch between your cameras easily.
Changing the position of the devices around your avatar​
​

In the same "Layout" menu, you will see the "Devices" submenu; just check "Free Layout".
Keyboard action position
Once that's done, return to the screen where your avatar is, and you will find gizmos on each of the devices your avatar interacts with.
Keyboard action
Use the gizmos to reposition the devices to your liking. On the top left you can choose whether the gizmos change the position, rotation or scale of the devices, and whether they use the device itself as the frame of reference ("Local") or global coordinates ("World").

Finally, if you want to go back to the default camera, just click the button at the bottom and the camera will reset itself immediately.

Turning off all devices

Now, if you don't want any devices and just want your avatar to stand and talk, you can disable all of them in the same Layout menu. Scroll down until you see Device Visibility; there you will find the different options you can turn on and off. If you turn off all the devices, I would also suggest selecting "Always Hands-Down Mode" in the Motion menu.
Turning off all devices
There's also a little option you can play with: "Typing Effect", the effect that plays every time you type something on the keyboard. Select "None" if you don't want any.
Changing the background

You can change the background color, if you don't want the bright chroma green, by going into the "Window" menu and using the background color edit option. How the background looks is up to you.
Background chroma
You can also make the background completely transparent by going into the "Basic settings" just above the Background submenu and toggling the "Transparent Window" option.
Background chroma
Setting up some animations
Wouldn't it be nice if you could make your avatar wave at the camera with a single click?
You can, and it's fairly easy to do.

Go to the last option in the settings menu, just below "Devices". There you will see a lot of different actions, differentiated by two icons: facial expressions (the ones with the head highlighted) and body motions (the ones with the body highlighted).
​
By default, the way to activate any action is to type its name on your keyboard. For example, if you want the "Joy" emote to play, just type "joy" and it should work.

​You can set different ways to trigger the different animations and expressions. You can use a gamepad, a MIDI Controller or you can use the different numbers on your keyboard to trigger them.
Animations actions
Saving and loading configurations

Do you have your setup ready and want to save it for another time? You can! Just head back to the VMagicMirror menu (not the settings; the window where you load your avatar), and at the bottom right you will see different options under "Setting Management".

There you can Save, Load and Reset to default if you want.

Saving creates a .VMM file with all the information about how everything is set up. Once you have that file saved in a folder, you only need to click Load and everything will be restored correctly.

VMagicMirror Menu file
Zoom time

Now it's time to set up Zoom. This part is easy and fast. Since VMagicMirror is a program, you must use the "Share Screen" option instead of a webcam. Find the window your avatar is in, and just click Share.
Setting screen Zoom
Be mindful: if it ever says that screen sharing has stopped, because you minimized the window or for any other reason, you can always restart it by clicking Resume Screen Sharing on the menu toolbar that appears at the top or bottom of your screen. This keeps your VMagicMirror window shared with other people even while you interact with other programs.
​

​Other Options: 3teneFREE

It is a free, VRM-compatible program with characteristics similar to VMagicMirror. One difference is that it lets you highly customize the background, with the possibility of even adding other 3D models to the scene.

3teneFREE VRM
Trouble-Shooting

Unity blend shape
Mouth not moving?

It is possible that your microphone isn't being recognized by the program. To fix this, go to Settings, then to Face. Under basic settings you should see the first option, "LipSync". Make sure it's enabled, and in the dropdown menu to its right, find your microphone.

If it's not there, restarting the program should fix it.
If that still doesn't work, it's possible that the blend shapes are not correctly set up for VMagicMirror.

We are going to fix that, and we will need Unity 2019.4 LTS and the latest VRM plugin for Unity, which you can download here.

https://github.com/vrm-c/UniVRM/releases

Just download the UnityPackage file and drag and drop it into Unity to install it. Now do the same with your VRM: drag and drop it into Unity.

When it finishes loading the file, a bunch of folders referencing the VRM will appear.

One of these folders is called "BlendShapes". Open it, and you will see a bunch of different names. For the sake of simplicity, the only ones we will look at are:
Unity blend shape
  • A
  • E
  • I
  • O
  • U
Double-click any of them, and on the right side, in the Inspector tab, you will see a bunch of different options. Go to the last one and click the little triangle to unfold more blend shapes. Now you only need to find the correct one and match it with the vowel you want.
​

For example, if you are on A, you need to find "vrc_v_aa", since that's the one that matches.
Unity blend shape
The same goes for every other vowel:
  • aa for A
  • ee for E
  • ih for I
  • oh for O
  • ou for U
Once that's done, just export the avatar again.

Drop it into the scene, select it, go to VRM → UniVRM 0.58 (or whatever version you are using) → Export Humanoid, and just click Export, since it already has all the information filled in.
export humanoid VRM
My arm is not bending correctly

Unfortunately, VMagicMirror is still fairly early in development and these kinds of things can happen. The way some of the IK is set up makes it impossible to fix these problems from our end.

If changing the arm parameters as described before doesn't improve the result, you might want to try another program for a more permanent solution.
Pedro Solans
3D ANIMATOR
​​Junior 3D Animator improving every day possible. Videogame and cat enthusiast.
Twitter

Async.Art & Polygonal Mind - The cube

4/21/2021


 
The Mission
With the event of The MegaCube, created by Polygonal Mind, a lot of prizes were given away and a lot of crypto partners had the opportunity to promote themselves.

One of these partners was Async.Art, a place where you can collect and create programmable art.

From this collaboration, the first 3D piece of programmable art was created: The Cube.
Resources
  • Maya
  • Textures
Async.Art & Polygonal Mind The cube
What is Async.Art?

As I said before, Async.Art allows you to create and collect programmable art, but what does "programmable" mean?

I think a better way to express it is to say that the art changes depending on its owners.

How does it do that?
Usually, when you create a piece of art, you create a unique piece and upload it to the site, where you can sell it or keep it. What Async tokenizes, however, are two things:
  • The Master: the final piece of art.
  • The Layers: the elements that compose the piece of art.

Think of it like Photoshop: you have a file with your work, and you divide it into different layers to work more easily. This applies not only to Photoshop but also to video editing software, or even 3D.
2D diagram of how Async.art works, the frame also acts as a layer
This allows the same piece of art to be owned by several people: one owns the Master and others own the Layers.

But how does it change? The changes come with the Layers, more specifically with State changes.
States are, as the name indicates, different states of the same layer. They can change at any given time, which in turn changes the look of the Master.

A state can be changed by the layer's owner, or even automatically by different events such as time, day and night cycles, etc.

Making the art come alive.
Cube first steps:

How do we translate all of this to a 3D model? When I first started working on this, I thought there were different ways to approach it.

But one thing I was clear about was that my Master layer had to be the UV map of my 3D model. The question was how to divide the UV map and create the different layers.

As I saw it, we could work in the most basic manner, making each face a different image and one image per UV.

Or play with the different UVs to change how the image looks on the cube.
So in the first steps of this project I tried a simple approach and created a UV map that let us test how the cube could be divided, in order to see how to split the different layers.

At the same time we decided on the shape and style of the piece. We finally settled on a cube, because we couldn't forget that this was all thanks to The MegaCube.

The style is similar to what is called outrun or retrowave, characterised by dark backgrounds and neon lights decorating the scene.

Once we tested the layers, we decided to divide them into three layers with three different states each.
First design of the cube
Example of uv map cube
Style & Final result

Once we had the different states and layers, the moment came to create the model, and to play with the depth that 3D allows.

To do that, we played with transparency and moved the faces to put different levels in the same space. That way, even though the external shape is a cube, we can add more detail inside.
Final Cube in 3d with master
Once we did that, it was time to prepare the layers and divide them, so that the owners could change not only one thing but several across the different faces.

Finally, we decided to divide it into the background, the different faces and the frame. If you want to test how the texture changes, you can try it here:

https://async-explorer.herokuapp.com/test/canvasID=601450ca99a999001248f783

​And here is some of the possibilities in 3D.
Cube with different layers
Conclusion

This was a fun prize to make, and it made me discover this new way to create art. I had heard about it before, but it was only when I tested it that I discovered its possibilities.

If you want to see this cube, you can go to Decentraland and watch how it changes.
​Laura Usón
​​3D ARTIST
​Passionate about videogames, movies and creatures. artist by day and superhero at night.

Doing a Video Stream in Decentraland

4/15/2021


 
Premise 
"Create, explore and trade in the first-ever virtual world owned by its users." With this welcoming phrase, Decentraland invites us to dive into one of the first blockchain-backed platforms, an alternate reality where we can express our ideas and projects to the world.
​
Launched in February 2020, Decentraland has seen its potential grow as a place to display and showcase cryptoart. Nowadays you can find NFTs placed all over the Land: some inside buildings made specifically to gather them, others in parks and open areas, and some that can even be found flying around.
A capture of different points where NFT art can be seen, near coordinates 15,44 in Decentraland

The mission
This guide focuses on the steps to follow if you want to display a video NFT in Decentraland with the current version of the SDK (6.4.9 at the time of writing). We will see what Decentraland can and cannot do, and the chunks of code needed.
Resources
  • Unity Editor (2018.3.6f1)
    unityhub://2018.3.6f1/a220877bc173
  • Decentraland SDK
    https://docs.decentraland.org/development-guide/SDK-101/
  • Unity Decentraland Plug-in
    https://github.com/fairwood/DecentralandUnityPlugin
  • Decentraland Picture Frame Display
    https://docs.decentraland.org/development-guide/display-a-certified-nft/
  • Decentraland Video Stream
    https://docs.decentraland.org/development-guide/video-playing/
Listing your NFTs, knowing your limitations

The most important thing to do first is to create a list of the NFTs you want in place. Decentraland doesn't have a hard limit on NFT placing or video streaming as such; they count towards the Entity limits.
​
If you don't know your limitations, you can find a lovely spreadsheet here: https://docs.google.com/spreadsheets/d/1BTm0C20PqdQDAN7vOQ6FpnkVncPecJt-EwTSNHzrsmg/edit#gid=0

This is the list of assets we are going to display:
  • https://app.rarible.com/token/0xb932a70a57673d89f4acffbe830e8ed7f75fb9e0:16973:0xfec33a90737254dcf9aa33188a11f32797197f93 by ToxSam (https://app.rarible.com/toxsam/created)
Address: 0xb932a70a57673d89f4acffbe830e8ed7f75fb9e0
Token: 16973
  • https://app.rarible.com/token/0xd07dc4262bcdbf85190c01c996b4c06a461d2430:92976:0xfec33a90737254dcf9aa33188a11f32797197f93 by Bananakin (https://app.rarible.com/bananakin/created)
Address: 0xd07dc4262bcdbf85190c01c996b4c06a461d2430
Token: 92976
  • https://app.rarible.com/token/0x60f80121c31a0d46b5279700f9df786054aa5ee5:59183:0xb7ee502c4dadcc9a62934b02c5f8decbbfa32c48 by kourtin (https://app.rarible.com/kourtin/created)​
Address: 0x60f80121c31a0d46b5279700f9df786054aa5ee5
Token: 59183
  • https://superrare.co/artwork-v2/painting-study-01-8552 by Lauretta (https://superrare.co/lauretta)​
Address: 0xb932a70a57673d89f4acffbe830e8ed7f75fb9e0
Token: 8552

For our example we will use a splendid 1x1 land. Here we will build our environment and decide where to place our finest cryptoart pieces. For this small piece of metaverse, the platform allows us a maximum of 10,000 triangles, 200 entities, 30 bodies, 20 materials and 10 textures in our exported project.
Caption of the ECS limitations spreadsheet
To better understand the limitations, we have to take into account that every "entity" (what Unity calls a gameObject) that has its own independent transformation (position, rotation and scale) is considered an Entity, even if it doesn't have a 3D mesh (Body) attached to it.

This means that every NFT, video stream or model will count as at least one entity each.
​

So for now we know that our NFTs will take up 4 entities within the limitations, and that although all of them are animated, only one is a video source; it will be the one streamed as video. The other three artworks are in GIF format, which is supported by Decentraland's Picture Frame.
Extracting the video source

There is one simple rule for displaying an NFT picture frame in Decentraland: if it is on OpenSea, it can be shown. This is because the API used to extract the blockchain data for the artwork requires the Entity to include an NFTShape stating the smart contract address and the token ID of the artwork. This flexible setup allows you to incorporate NFT assets from SuperRare, Rarible, KnownOrigin, Whale, MakersPlace and many more into your land!
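In SDK 6 code, a picture frame boils down to an NFTShape pointing at that contract address and token ID. The `nftSrc` helper below is our own convenience, not part of the SDK, shown here with the ToxSam piece from the list above; the frame position is an arbitrary example:

```typescript
// Hypothetical helper (ours, not the SDK's): builds the "ethereum://" source
// string that NFTShape expects from a contract address and a token ID.
function nftSrc(contractAddress: string, tokenId: string | number): string {
  return `ethereum://${contractAddress}/${tokenId}`
}

// In a Decentraland SDK 6 scene you would then use it like this:
//   const frame = new Entity()
//   frame.addComponent(
//     new NFTShape(nftSrc('0xb932a70a57673d89f4acffbe830e8ed7f75fb9e0', 16973))
//   )
//   frame.addComponent(new Transform({ position: new Vector3(4, 1.5, 4) }))
//   engine.addEntity(frame)
```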

For all the artworks that fall outside the currently admitted formats (image file formats), we have to stream them as video, keeping in mind that even video streams have limitations.

The formats currently supported by the Decentraland API are .mp4 , .mov , .ogg , .webm.

Note: the inclusion of the .ogg format also makes it possible to stream audio only :)
​
To extract the video source of an artwork, it's often as simple as grabbing the static video URL that some platforms expose when you right-click on the artwork.
Video artwork Bananakin
Original artwork by: Bananakin
Source link: https://storage.opensea.io/files/28b5a343586b597f755148a85d8edd23.mp4
​

With this source our artwork can be streamed.
Deciding the placing

For this test we have developed a small environment where we place a "dummy" game object that indicates the complete transform data we need: the position of the video stream, its rotation and its scale. We have named this personal beacon "COG VideoDisplay (1)"; this code name makes it easier to find in the game.ts code that the export process generates for us.
Our view with Unity, you can already spot the place where we will place a video stream
​The view of the same place for the video stream in the game.ts code
As you may have noticed in the code, rotations are not set in Euler angles. Instead, Unity handles angles as quaternions to avoid gimbal lock. If you want to input Euler angles, remember to put .eulerAngles after rotation to indicate that your values are written differently.
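To make the quaternion point concrete, here is a small self-contained sketch, our own helper rather than engine code, of the degrees-to-quaternion conversion Unity performs (Unity applies Euler rotations in Z-X-Y order). In an actual scene you would simply write `Quaternion.Euler(0, 90, 0)` or set `rotation.eulerAngles`, as described above:

```typescript
type Quat = { x: number; y: number; z: number; w: number }

// Convert Euler angles in degrees (Unity's Z-X-Y order) to a quaternion.
function eulerToQuaternion(xDeg: number, yDeg: number, zDeg: number): Quat {
  const half = Math.PI / 360 // degrees -> half-angle radians
  const cx = Math.cos(xDeg * half), sx = Math.sin(xDeg * half)
  const cy = Math.cos(yDeg * half), sy = Math.sin(yDeg * half)
  const cz = Math.cos(zDeg * half), sz = Math.sin(zDeg * half)
  // q = qy * qx * qz, matching Unity's rotation order
  return {
    x: cy * sx * cz + sy * cx * sz,
    y: sy * cx * cz - cy * sx * sz,
    z: cy * cx * sz - sy * sx * cz,
    w: cy * cx * cz + sy * sx * sz,
  }
}

// A 90° turn around the vertical axis:
// eulerToQuaternion(0, 90, 0) ≈ { x: 0, y: 0.7071, z: 0, w: 0.7071 }
```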
place cryptoart pieces code
More about Euler vs Quaternions
​https://stackoverflow.com/questions/6002516/quaternions-vs-euler-angles
Placing the chunk of code

The official Decentraland documentation gives the baseline code for setting up a stream.
That baseline code creates a video plane at position (8, 1, 8) in the land, with a size of one square meter.
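A sketch of that baseline, put together from the video-playing guide linked in the resources (the source URL is the Bananakin clip extracted earlier; treat this as illustrative rather than the article's exact listing):

```typescript
// Baseline from the SDK 6 video-playing docs: a flat plane whose
// material samples a VideoTexture.
const myVideoClip = new VideoClip(
  'https://storage.opensea.io/files/28b5a343586b597f755148a85d8edd23.mp4'
)
const myVideoTexture = new VideoTexture(myVideoClip)

const myMaterial = new Material()
myMaterial.albedoTexture = myVideoTexture
myMaterial.roughness = 1
myMaterial.specularIntensity = 0
myMaterial.metallic = 0

const screen = new Entity()
screen.addComponent(new PlaneShape())
screen.addComponent(new Transform({ position: new Vector3(8, 1, 8) }))
screen.addComponent(myMaterial)
engine.addEntity(screen)

myVideoTexture.play()
```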
​
To adapt it to our scene, we extract the part we need from the exported code, specifically the part where the plane is spawned at the world position.
In the vanilla exported code, the entity "entity77588n" is the one named "COG VideoDisplay (1)". This is the Entity we need to work on to make it stream a video, which takes a few extra lines.
Instead of spawning a new entity, we set a material on the existing one, telling it to use a specific material (the one containing the VideoTexture), and describe the behaviour it should have when the player interacts with it.
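Following that description, the adapted entity might look something like this sketch. The transform values are placeholders for the ones the Unity export writes out, and the click-to-toggle behaviour is our own addition for illustration; the entity name follows the article:

```typescript
const videoClip = new VideoClip(
  'https://storage.opensea.io/files/28b5a343586b597f755148a85d8edd23.mp4'
)
const videoTexture = new VideoTexture(videoClip)

const videoMaterial = new Material()
videoMaterial.albedoTexture = videoTexture

// entity77588n comes from the exported game.ts; the transform values below
// are placeholders for the ones the export wrote out.
const entity77588n = new Entity('COG VideoDisplay (1)')
entity77588n.addComponent(new PlaneShape())
entity77588n.addComponent(
  new Transform({
    position: new Vector3(8, 2, 8),
    rotation: new Quaternion(0, 0, 0, 1),
    scale: new Vector3(3, 1.7, 1),
  })
)
entity77588n.addComponent(videoMaterial)

// Toggle playback when the player clicks the screen.
let isPlaying = true
entity77588n.addComponent(
  new OnPointerDown(() => {
    isPlaying ? videoTexture.pause() : videoTexture.play()
    isPlaying = !isPlaying
  })
)
engine.addEntity(entity77588n)
videoTexture.play()
```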
Local deploying and debugging

After setting up our code, we can deploy it locally and see if Decentraland runs it. Luckily for us it works, and the video can be seen in motion alongside the other NFTs.

The main drawback of streaming a video is that it is not an NFT in essence; we could say it somewhat breaks the spirit of the blockchain, but it's the only way possible to do this at the moment.
Decentraland runs our code
Another point against its use is memory usage and the overload that playing raw videos or streaming too much data into Decentraland may cause. Because the platform is already streaming a lot of information, overloading it with additional videos and images can be problematic if you are looking for a smooth experience.
Additional features to your stream

You can also set different properties on the stream that are not enabled by default; for example, you may want to start the video at a specific position or stream it slower. This is the complete list of things you can add to your custom stream code:
​
  • videoTextureName.play()
Plays the stream; useful to trigger the video from custom events.
  • videoTextureName.pause()
Pauses the video.
  • videoTextureName.reset()
Takes the stream back to the starting point.
  • videoTextureName.loop
A boolean (true or false) indicating whether your stream loops continuously or stops at the end of the video.
  • videoTextureName.playbackRate
The speed at which your video streams; 1 is the default.
  • videoTextureName.volume
The audio volume of the video; 1 by default.
  • videoTextureName.seek
Allows you to change the point the video starts from once it's played or reset; -1 by default.
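Put together, a sketch of tuning a stream with these properties (the clip URL and the values are placeholders of our own):

```typescript
const clip = new VideoClip('https://example.com/myVideo.mp4')
const videoTextureName = new VideoTexture(clip)

videoTextureName.loop = true         // start over when the video ends
videoTextureName.playbackRate = 0.5  // stream at half the default speed
videoTextureName.volume = 0.8        // a bit quieter than the default 1
videoTextureName.seek = 30           // begin 30 seconds in (-1 is the default)
videoTextureName.play()              // start streaming
```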
place cryptoart pieces code
Conclusion

Streaming a video source in Decentraland is, among the things you can develop for your land, simpler than it might seem at first glance. But be careful when placing multiple videos streaming non-stop, as they can overload your scene (and your neighbouring lands too!).
​Kourtin
​ENVIRONMENT ARTIST
​I purr when you're not looking. I'm passionate about environments and all the techie stuff to make them look rad. Learning and improving everyday to be a better hooman.
TWITTER

Add Dynamic Bones to your character in Unity

4/7/2021


     
Overview
Adding bones that react to your movement can make a drastic difference to your character.
From hair to a tail or a skirt, making stuff move makes everything cooler and more interesting.
We added googly eyes to one of our VRChat avatars to make it more fun.

Because we can, and you can too.
Resources
  • Unity
  • MayaLT 2019 or your preferred 3D software
  • Dynamic Bones (Unity Asset)
Add dynamic bones to your character in Unity
Adding the bones

Of course, the eyes won't move by themselves; they need a bone that will make them bounce. I'm sure you followed our rigging tutorial to easily rig your character with Mixamo and fix any nasty problems; if not, be sure to check it out here:

Fix and reset your Mixamo avatar rig
    We start with a rigged character. Meet the toast.
    ​

    We didn't make any changes aside from fixing the skin so it doesn't break every time we move a bone.

    Have you got your character ready? Good, now we're adding the new bones.
    rigged character in Maya LT
    Using the Create Joints tool, add your bones wherever you want. Make the bone chains as long as you need so the motion looks as smooth as possible.
    skeleton menu joints
    eyes rigged character
    Since the eyes don't need any chain at all, we basically created the eye bones starting from the head.
    ​
    Be sure to skin the new bones properly. Personally, we had to remove the pupils from the eyes and skin them separately.
    Want to give your character even more personality? Use blend shape visemes to add facial expressions while talking. You can easily follow our guide here:

    Create and upload a VRChat Avatar with blend shapes visemes

    Now, export your character, making sure the skin weights are correct and the skinning checkbox is ticked in the export options.


    Time to bounce

    Next stop, Unity.
    ​
    Be sure to have the Dynamic Bones asset installed in your project, as it's what makes the new bones able to move.
    Dynamic Bones asset installed
    Check if everything is correct and the skin weight is working properly.
    ​
    Drag and drop the DynamicBones.cs script onto your character mesh or add a new component on the inspector tab. Time for some tweaks.
    As you can see, a lot of stuff came out.

    Lots of levers, buttons and numbers appeared, which can be a little intimidating at first, but I'll try to show you that it's really easy to get really good results by adjusting just a few parameters.

    It's quite easy to do, but you will need patience to get really great results, as most of the work comes from testing and adjusting what seems wrong.

    Lots of testing.

    While there are a lot of things you can touch, we will stick to the basics.
    Dynamic Bones inspector menu
    By default, Dynamic Bones gives pretty good results for bones that interact with meshes and are affected by gravity, but our case is a little bit special, and we will have to adjust it correctly.
    Let's start

    First of all, we need to assign which bones we want to be dynamic. For that, we will select the "Root Bone", that is, the bone before all of our dynamic bones.
    In this particular case, since we want to make the eyes dynamic, both eye bones are attached to the head bone. That is our root.
    ​
    Be sure not to select the end of the bone. Rookie mistake.
    Dynamic bones script
    Testing

    Test, test, test. Move your character. Rotate it. Make sure it does what you want. You can get a lot of different effects by just adjusting a couple of parameters.
    Testing dynamic bones
    This is definitely not what we want. While the eyes move accordingly on the X and Y axes, we don't want them to move on the Z axis.
    freezing axis
    Luckily, you can freeze any axis you want, so you can avoid this kind of problem.

    Now the eyes won't pop out of their sockets, which is, to say the least, nice.
    Eyes in place, but we need to tweak how the eyes move and behave when moved. These are all the options in the image below.
    Damping: Adjusts how fast the bone comes to a stop.

    Elasticity: Adjusts how much the bone is pulled back to its default position.

    Stiffness: Adjusts how much the chain of bones can move and bend.

    Inert: Adjusts how much of the character's own movement is ignored when simulating the bones.
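To build an intuition for those four sliders, here is a toy one-dimensional spring step. This is an assumed simplification for illustration, not the asset's actual code: each update the bone picks up part of the character's motion (Inert), springs back toward its rest position (Elasticity), loses speed (Damping), and is clamped in how far it may bend (Stiffness).

```typescript
// Toy 1-D sketch of the four Dynamic Bones sliders described above.
// NOT the asset's real simulation -- just an illustration of each knob.
interface BoneParams {
  damping: number    // 0..1, how fast motion dies out
  elasticity: number // 0..1, pull back toward the rest position
  stiffness: number  // 0..1, how much bending is clamped away
  inert: number      // 0..1, how much of the character's motion is ignored
}

function stepBone(
  offset: number,       // bone tip offset from its rest position
  velocity: number,     // current bone velocity
  charVelocity: number, // velocity of the character carrying the bone
  p: BoneParams
): { offset: number; velocity: number } {
  // Inert: the bone only inherits part of the character's motion
  velocity += charVelocity * (1 - p.inert)
  // Elasticity: spring back toward rest (offset 0)
  velocity -= offset * p.elasticity
  // Damping: bleed off velocity so the bone comes to a stop
  velocity *= 1 - p.damping
  offset += velocity
  // Stiffness: clamp how far the bone may bend away from rest
  offset *= 1 - p.stiffness
  return { offset, velocity }
}
```

With inert at 1 the bone ignores the character's motion entirely, and with stiffness at 1 it never bends at all; the in-between values behave like the sliders in the inspector, which is why testing and tweaking is the bulk of the work.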
    how the eyes move
    Now that you know what each option does, it's time to test. Tweak some settings and try for yourself.
    dynamic bones eyes elasticity
    For the eyes, we adjusted the elasticity, the stiffness and the damping to get the behaviour we desired.
    ​
    This is what gave us the best results, but of course, every character has its own sweet spot, and you will have to figure it out for yourself.

    Once you have your dynamic bones working as well as you want, that's it! There's no more to do. Now you can do whatever you want with your character: use it in your game or scenes, or upload it to VRChat.
    If you don't know how to do that, check out our guide about how to upload your avatars to VRChat.
    Conclusion

    Dynamic bones are a simple yet super effective way to give life to your characters. With just a little bit of tweaking you can get really good results, making your characters more dynamic and lifelike.
    ​
    Moving clothes, hair, tails and eyes are just the beginning;
    your imagination is the limit here. Be creative!
    ​Pedro Solans
    3D ANIMATOR
    Junior 3D Animator improving every day. Videogame and cat enthusiast.
    Twitter


     © 2015-2022 POLYGONAL MIND LTD. ALL RIGHTS RESERVED.