The first thing you need to know to create different animations in Blender is: what are Actions? How do you create one? How do they work? And how do you delete them?
An Action in Blender is a tool to record and contain animation data. Like everything else in Blender, Actions are data-blocks. So when you animate an object by changing its location with keyframes, the animation is saved to an Action. That way you can create as many animations as you want, one per Action.
Each animation can be a different Action for the character or object: for example run, idle, shoot...
To view the Action menu, change the editor to the Dope Sheet, then switch its mode to the Action Editor.
Changing the window to dope sheet
Menu to change to the Action Editor
When you have the editor in view, you can start creating actions, but before you do, keep a couple of things in mind:
Once you have created an armature (top corner, Add → Armature), you are ready to create actions. With the armature selected, press the New button, change the name of the animation, and don't forget to press the shield icon (if you do not press the shield, Blender will discard the action when you close the file and you will lose the changes you made to it).
With the armature selected, create a new action
When you finish with an action, close it by pressing the X button on the right; this takes you back to the first menu of the editor, where you can create a new action from scratch. An important note: if you want to delete an action because you don't like it, don't need it, or duplicated it by mistake, DON'T press the X button alone (as stated before, this closes the action but does not delete it), and DON'T press the Delete or Backspace keys either; they won't delete it.
The ONLY way to delete an action is to press Shift + the X button on the right, then close and reopen the file to refresh the engine; if you do not restart the program, you will still see the actions you deleted.
From Maya To Blender
Now let's say that you want to work in another software, like Maya. This time the change is easy, but you have to follow a series of steps.
Once you import the FBX into Blender, it will create an armature in the scene with the animations in it. Then all you have to do is:
Exporting and Viewing
When you export the model, check a few options in the presets:
Web with the different animations to select
Getting your VRM ready
If you bought your avatar from CryptoAvatars.io or you already have a VRM file, congrats! You don't have to do anything here; you can go to the next section.
This part of the guide is only for people who want to use one of our 100+ free avatars or any FBX model for their meetings.
Get your FBX ready, because you are going to turn it into a VRM file. For this part, you will need Unity 2019.4 LTS and the latest available version of the VRM plugin for Unity, which you can download right here: https://github.com/vrm-c/UniVRM/releases
Once you have both, just create a new Unity project and drag and drop in the UnityPackage containing the VRM plugin.
Get your FBX and texture into the project too, and create a new material. Make sure everything is correct. Things you should be on the lookout for are:
Now that everything is ready, it's time to export.
Select your avatar and go to the top left of your screen, to VRM → UniVRM 0.58 (or whatever version you are using) → export humanoid.
Set the language to English, if you haven't already, and add a title, author and version for the avatar.
Now click export and save it wherever you want.
Nice! You now have a VRM file that you can use on the next steps.
Basic VMagicMirror settings
Once you start it up, you will see a green screen and another window in Japanese. We will set the language to English, of course.
Now it's time to load our VRM file. Use the "Load File on PC" button to find your VRM file.
The avatar will likely appear out of frame. Right below the load button there is an "Adjust Size by VRM" button; click it and it should bring everything a bit more into focus.
It is very likely that the avatar's arms look broken; we are going to fix that now.
Go into the settings menu, then the Motion subsection, and you will see the Arm menu. There are two options you want to look at: "Waist width" and "Strength to keep upper arm to body [%]".
You can tinker with those two options until you get a desirable result.
More VMagicMirror settings
The basic stuff is ready to go, but if you want to change anything else, these other settings might help you tune things to your liking.
Changing the position of the camera
Open the settings window and find the "Layout" menu. Here you'll find all the Camera settings you need.
By checking "Free Camera Mode" you'll be able to move the camera all you want inside the View Window, where your Avatar is. Clicking with the middle mouse button allows you to move the camera and the right click button rotates the camera around. Using the scroll wheel up and down you can zoom in and out. You can also change the Field Of View (FOV) just below the "Free Camera Mode".
Changing the position of the devices around your avatar
On the same "Layout" menu you will see the "Devices" submenu; just check "Free Layout".
Once that's done, return to the screen where your avatar is, and you will find gizmos on each of the devices your avatar interacts with.
Use the gizmos to reposition the devices to your liking. On the top left you can choose whether to change the position, rotation or scale of the devices, and whether the gizmos use the device itself as the coordinate reference ("Local") or global coordinates ("World").
Finally, if you want to go back to the default camera, just click the button at the bottom and the camera will reset itself immediately.
Turning off all devices
Now, if you don't want any devices and just want your avatar to stand and talk, you can disable all the devices in the same Layout menu. Just scroll down until you see Device Visibility, where there are different options you can turn on and off. If you turn off all the devices, I would also suggest selecting the "Always-Hands-Down Mode" in the Motion menu.
Changing the background
You can change the background color, if you don't want the bright chroma green, by going into the "Window" menu and using the background color edit option. How the background looks is up to you.
You can also make the background completely transparent by going into "Basic settings", just above the Background submenu, and toggling the "Transparent Window" option.
Setting up some animations
Saving and loading configurations
Do you have your place set up and want to save it for another time? You can! Just head back to the VMagicMirror menu (not the settings; the menu where you load your avatar), and on the bottom right you will see different options under "Setting Management".
There you can Save, Load and Reset to default.
Saving creates a .VMM file with all the information about how everything is set up. Once you have that file saved in a folder, you only need to click Load and everything will be set up correctly.
Now it's time to set up Zoom. This one is easy and fast. Since VMagicMirror is a program, you must use the "Share screen" option instead of a webcam. Find the window your avatar is in and just click Share.
Be mindful: if it ever says that screen sharing has stopped, because you minimized the window or for any other reason, you can always restart it by clicking Resume Screen Sharing on the menu toolbar that appears at the top or bottom of your screen. This keeps your VMagicMirror window shared with other people even while you interact with other programs.
Other Options: 3teneFREE
3teneFREE is a free, VRM-compatible program with characteristics similar to VMagicMirror. One of the differences is that the background is highly customizable, with the possibility of even adding other 3D models to the scene.
If that still doesn't work, it's possible that the blendshapes are not correctly set up for VMagicMirror.
We are going to fix that, and we will need Unity 2019.4 LTS and the latest VRM plugin for Unity, which you can download here.
Just download the UnityPackage file and drag and drop it into Unity to install it. Now do the same with your VRM: drag and drop it into Unity.
When it finishes loading the file, a bunch of folders referencing the VRM will appear.
One of these folders is called "Blendshapes". Enter it, and you will see a bunch of different names. For the sake of simplicity, the only ones we will look at are:
For example, if you are on A, you need to find "vrc_v_aa", since that's the one that matches.
The same goes for every other one:
Drop it into the scene, click on it, go to VRM → UniVRM 0.58 (or whatever version you are using) → export humanoid, and just click Export, since it already has all the information inside.
My arm is not bending correctly
Unfortunately, VMagicMirror is still fairly early in development, and these kinds of things can happen. The way some of the IK is set up makes it impossible for us to fix these kinds of problems from our end.
If changing the arm parameters as described before doesn't improve the result, you might want to try another program for a more permanent solution.
What is Async.Art?
Well, usually when you create a piece of art, you create a unique piece and upload it to the site, where you can proceed to sell it or keep it. What Async tokenizes, however, are two things:
Think about it like Photoshop: you have a file that contains your work, and you divide it into different layers to work more easily. This applies not only to Photoshop but also to video editing software, or even 3D.
Cube first steps:
So in the first steps of this project I tried a simple approach and created a UV map that let us test how the cube could be divided, in order to see how we could split the different layers.
At the same time we decided on the shape and style of the piece. We finally settled on a cube, because we couldn't forget that all of this was thanks to the Megacube.
The style is called Outrun, or retrowave, characterised by dark backgrounds and neon lights decorating the scene.
Once we tested the layers, we decided to divide them into three layers with three different states each.
Style & Final result
Once we did that, it was time to prepare the layers and divide them, so that the owner could change not just one thing but several across the different faces.
Finally, we decided to divide it into the background, the different faces and the frame. If you want to test how the texture changes, you can try it here:
And here are some of the possibilities in 3D.
Cube with different layers
This was a fun prize to make, and it led me to discover this new way of creating art. I had heard about it before, but it was only when I tested it that I discovered its possibilities.
If you want to see this cube, you can go to Decentraland and watch how it changes.
"Create, explore and trade in the first-ever virtual world owned by its users." With this welcoming phrase, Decentraland invites us to dive into one of the first blockchain-backed platforms, an alternate reality where we can express our ideas and projects to the world.
Launched in February 2020, Decentraland has seen its potential grow as a place to display and showcase cryptoart. Nowadays you can find NFTs placed all over the Land, some of them inside buildings specifically made to gather them, others in parks and open areas and even some of them can be found flying around.
A caption of different points where NFT art can be seen, near coordinates 15,44 in Decentraland
Listing your NFTs, knowing your limitations
The most important thing to do first is to make a list of the NFTs you want to place. Decentraland doesn't have a hard limit specifically on NFT placement or video streaming; they simply count toward the Entity limitation.
If you don't know your limitations, you can find a lovely spreadsheet here: https://docs.google.com/spreadsheets/d/1BTm0C20PqdQDAN7vOQ6FpnkVncPecJt-EwTSNHzrsmg/edit#gid=0
This is the list of assets we are going to display:
For our example we will use a splendid 1x1 land. Here we will build our environment and decide where to place our finest cryptoart pieces. For this small piece of metaverse, the platform allows us a maximum of 10,000 triangles, 200 entities, 300 bodies, 20 materials and 10 textures in our exported project zip folder.
Caption of the ECS limitations spreadsheet
To have a better understanding of the limitations, take into account that every "entity" (called a gameObject in Unity) that has its own independent Transform node (position, rotation and scale) is considered an Entity, even if it doesn't have a 3D mesh (Body) attached to it.
This means that every NFT, video stream or model will count at least as one entity each.
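The per-parcel numbers scale with the size of your land. As a rough sketch of the formulas in the limitations spreadsheet (treat the exact coefficients as assumptions and double-check them against the sheet), a quick budget check could look like this:

```typescript
// Sketch of the per-scene limits, assuming the spreadsheet formulas:
// linear in the parcel count for triangles/entities/bodies, logarithmic
// for materials and textures. Verify the coefficients against the sheet.
function sceneLimits(parcels: number) {
  const log = Math.log2(parcels + 1);
  return {
    triangles: parcels * 10000,
    entities: parcels * 200,
    bodies: parcels * 300,
    materials: Math.floor(log * 20),
    textures: Math.floor(log * 10),
  };
}

// Each NFT frame, video stream or model costs at least one Entity.
function fitsEntityBudget(parcels: number, plannedEntities: number): boolean {
  return plannedEntities <= sceneLimits(parcels).entities;
}
```

For our 1x1 example, `sceneLimits(1)` reproduces the numbers above, and our four artworks fit comfortably inside the 200-entity budget.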
So for now we know that our NFTs will "cost" 4 Entities within the limitations, and that although they are all animated, only one is a video source; it will be the one doing a video stream. The other three artworks are in GIF format, which is supported by Decentraland's Picture Frame.
Extracting the video source
There is one simple rule in Decentraland for displaying an NFT picture frame: if it is on OpenSea, it can be shown. This is due to the API used to extract the blockchain data to display the artwork, which requires the Entity to include an NFTShape stating the Smart Contract address and the Token ID of the artwork. This flexible setup allows you to incorporate into your land NFT assets from SuperRare, Rarible, KnownOrigin, Whale, MakersPlace and many more!
For all the artworks that fall outside the currently admitted formats (image file formats), we have to do a video stream of them, but we must know that even video streams have limitations.
The formats currently supported by the Decentraland API are .mp4 , .mov , .ogg , .webm.
Note: the inclusion of the .ogg format actually makes it possible to stream only audio :)
To extract the video source of an artwork, it's often as simple as grabbing the static video source some platforms give you by right-clicking on the artwork.
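The NFTShape mentioned above identifies the artwork with a single pointer string built from the Smart Contract address and the Token ID. A minimal sketch of how that string is assembled (the address and token ID used here are hypothetical placeholders, not a real artwork):

```typescript
// Sketch: build the pointer string an NFTShape expects, in the form
// "ethereum://<contract-address>/<token-id>". The example values are
// placeholders for illustration only.
function nftPointer(contractAddress: string, tokenId: string): string {
  if (!contractAddress.startsWith("0x")) {
    throw new Error("expected a hex contract address");
  }
  return `ethereum://${contractAddress}/${tokenId}`;
}

// In a scene this would feed something like:
//   new NFTShape(nftPointer("0x0123456789abcdef0123456789abcdef01234567", "42"))
```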
Original artwork by: Bananakin
Source link: https://storage.opensea.io/files/28b5a343586b597f755148a85d8edd23.mp4
With this source our artwork can be streamed.
Deciding the placing
For this test we have developed a small environment scenario where we place a "dummy" game object that gives us the complete Transform data we need: the position of the video stream, its rotation and its very own scale. We have named this personal beacon "COG VideoDisplay (1)"; this code name makes it easier to find in the game.ts code that the export process generates for us.
Our view with Unity, you can already spot the place where we will place a video stream
The view of the same place for the video stream in the game.ts code
As you may have noticed in the code, rotations are not set in Euler angles. Instead, Unity works with Quaternions to avoid gimbal lock. If you want to input Euler angles, remember to put .eulerAngles after rotation to indicate that your values are written differently.
More about Euler vs Quaternions
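To make the difference concrete, here is a small self-contained sketch of the math behind the quaternion form: a rotation of some angle around an axis (the intuitive, "Euler-style" input) becomes four numbers with no gimbal-lock issues.

```typescript
// Sketch: convert an axis-angle rotation into the quaternion form that
// engines like Unity store internally: (axis * sin(θ/2), cos(θ/2)).
type Quaternion = { x: number; y: number; z: number; w: number };

function axisAngleToQuaternion(
  axis: [number, number, number],
  degrees: number
): Quaternion {
  const len = Math.hypot(axis[0], axis[1], axis[2]);
  const half = (degrees * Math.PI) / 360; // half the angle, in radians
  const s = Math.sin(half) / len; // normalize the axis while scaling
  return {
    x: axis[0] * s,
    y: axis[1] * s,
    z: axis[2] * s,
    w: Math.cos(half),
  };
}

// A 90° turn around the Y axis:
const q = axisAngleToQuaternion([0, 1, 0], 90);
// q.x = q.z = 0, while q.y ≈ 0.7071 and q.w ≈ 0.7071
```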
Placing the chunk of code
As the official Decentraland documentation follows, the code to do a stream goes by stating the following lines:
This chunk of code creates a video plane at position 8,1,8 in the land, one meter square in size.
To adapt it, we extract the code we need into our scene; specifically, we need the part where a plane is spawned at the world position.
This is the vanilla code, stating that "entity77588n" is called "COG VideoDisplay (1)". This is the Entity we need to work on to make it stream a video. For that, we add the following lines:
Instead of spawning a new entity, we have decided to set a material on the current one, tell it to use a specific material (the one containing the VideoTexture), and describe the behaviour it should have when the player interacts with it.
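A sketch of what those added lines can look like, based on the video-playing pattern in the Decentraland SDK documentation (the `entity77588n` name comes from the exported scene above; the stream URL is the one we extracted earlier, and the exact API surface may vary with your SDK version):

```typescript
// Sketch of the lines added to game.ts. Assumes the Decentraland SDK
// scene environment; not runnable outside a scene.
const clip = new VideoClip(
  "https://storage.opensea.io/files/28b5a343586b597f755148a85d8edd23.mp4"
);
const texture = new VideoTexture(clip);

// Build the material that carries the VideoTexture.
const mat = new Material();
mat.albedoTexture = texture;
mat.roughness = 1;
mat.metallic = 0;
mat.specularIntensity = 0;

// entity77588n is the plane exported as "COG VideoDisplay (1)".
entity77588n.addComponent(mat);

// Describe what happens when the player interacts with it.
entity77588n.addComponent(
  new OnPointerDown(() => {
    texture.playing = !texture.playing; // toggle playback on click
  })
);
```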
Local deploying and debugging
After setting up our code, we can deploy it locally and see if Decentraland runs it. Luckily for us it works, and the video can be seen in motion alongside the other NFTs.
The main drawback of streaming a video is the fact that it is not an NFT in essence; you could say it somewhat breaks the spirit of the blockchain, but it's the only way possible to do this at the moment.
Another point against its use is the memory usage and the overload that playing raw videos or streaming too much data into Decentraland may cause. Because the platform is already streaming a lot of information, overloading it with additional videos and images can be problematic if you are looking for a smooth experience.
Additional features to your stream
You can also set different properties on the stream that are not enabled by default; for example, you may want to start the video at a specific position or play it slower. This is the complete list of things you can add to your custom stream code:
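As a sketch of the kind of properties involved (names follow the SDK docs for VideoTexture; verify them against your SDK version, and assume `texture` is the VideoTexture behind the stream material):

```typescript
// Optional VideoTexture tweaks for a custom stream. Assumes "texture"
// is the VideoTexture created for the stream material.
texture.playing = true;      // start or stop playback
texture.loop = true;         // restart the video when it reaches the end
texture.volume = 0.5;        // audio volume, from 0 to 1
texture.playbackRate = 0.5;  // play at half speed ("stream it slower")
texture.seek = 30;           // start 30 seconds in, if your SDK exposes it
```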
Streaming a video source in Decentraland is, among the things you can develop for your land, simpler than it might seem at first glance. But be careful when placing multiple videos streaming non-stop, as they can overload your scene (and your neighbouring lands too!)
Adding the bones
Of course, the eyes won't move by themselves; they need a bone that will make them bounce. I'm sure you followed our rigging tutorial to easily rig your character with Mixamo and fix any nasty problems; if not, be sure to check it out here:
Fix and reset your Mixamo avatar rig
Using the Create Joints tool, add your bones wherever you want. Make the bone chains as long as you need so the motion looks as smooth as possible.
Want to give your character even more personality? Use blend shape visemes to add facial expressions while talking. You can easily follow our guide here:
Create and upload a VRChat Avatar with blend shapes visemes
Now export your character, making sure the skin weights are correct and the Skin checkbox is ticked.
Time to bounce
Next stop, Unity.
Be sure to have the Dynamic Bones asset installed in your project because it's what we need to be able to move the new bones.
Check if everything is correct and the skin weight is working properly.
Drag and drop the DynamicBones.cs script onto your character mesh, or add it as a new component in the Inspector tab. Time for some tweaks.
By default, Dynamic Bones gives pretty good results for bones interacting with meshes and being affected by gravity, but my case is a little bit special, and we will have to adjust it accordingly.
First of all, we need to assign which bones we want to be dynamic. For that, we will select the "Root" bone, that is, the bone right before all of our dynamic bones.
Test, test, test. Move your character. Rotate it. Make sure it does what you want. You can get a lot of different effects by just adjusting a couple of parameters.
This is definitely not what we want. While the eyes move correctly on the Y and X axes, we don't want them to move on the Z axis.
Eyes in place, but we still need to tweak how the eyes move and behave when moved. These are all the options, in the image below.
Knowing what each option does, now it's time to test. Tweak some settings and try for yourself.
If you don't know how to do it, check out our guide about how to upload your avatars to VRChat.
Dynamic bones are a simple yet super effective way to give life to your characters. With just a little bit of tweaking you can get really good results, making your characters more dynamic and lifelike.
Moving clothes, hair, tails and eyes are just the beginning; your imagination is the limit here. Be creative!
Gradients with position map
Gradients in Substance Painter can be complex to make if we don't use the right tool. For this example, we are going to make a gradient from red to gray on this character's pants.
To start, we will need the position map baked. If it isn't, you can bake it directly in Substance Painter.
We start by creating a fill layer on top of the base color layer.
We create a mask on the gradient layer and add a 3D Linear Gradient generator.
Inside the generator we have the options 3D Position Start and 3D Position End; for the gradient to work correctly, we have to pick the colors from the model's position map.
When we return to the material display mode, we see the result of the gradient.
Baking lighting is a very simple and very useful process that can help us highlight parts of the model, and if the final model is not going to be affected by real lighting, it can provide a more realistic touch.
We will start by creating a fill layer with the properties we want the light to have, to which we add a Light generator.
Inside the generator we will find its properties, such as the direction, the height and the intensity.
Once we have everything configured to our liking, we will have to change the layer blending mode. The light can work well with Overlay or Soft Light, although the best way is to try which one best matches the result we are looking for.
Anchor points are used so that smart masks can detect deformations of the normals made in other layers that are not baked into the normal map.
In this example we have a layer with height, and we want the mask we are going to create next to paint the edges automatically.
To solve this, we must enter the smart mask options and modify some attributes.
The first is in Microdetails: change the first two parameters to True. The second is at the bottom, in Micro Height: select Anchor Points and look for ours (it's recommended to name the anchor points properly). After doing this, everything should work correctly.
Work the roughness
The roughness map is one of the essential maps in a PBR material, and also one of the most important when it comes to giving detail to a model, so it is important that the roughness in PBR models has real work and detail put into it.
In the first image we have a model with practically flat roughness; in the second image we have the same model with a much more worked roughness.
To achieve these results we can use smart masks, occlusion and curvature masks, and of course textured brushes to add wear details.
Although it may seem very basic, the variations in a material's glossiness are what help give a model realism.
Layer blending mode