The artistic and technical processes for generating VRM avatars from the Nouns PFP collection
Nouns is a PFP NFT project of pixel-art characters, a collection that lives on the Ethereum blockchain.
Our role as a 3D creative studio centers on crafting interoperable VRM avatars derived from the Nouns 2D PFP collection. This enriches the functionality of the NFTs, granting token owners the opportunity to use a 3D avatar in virtual worlds. Both the 2D PFPs and their corresponding avatars are available on VIPE, our marketplace for interoperable avatars.
Building on the 3D models and textures produced by another development team, we generated a VRM avatar for every token in the Nouns collection. Their open-source project enabled us to transform the GLB bases into VRM avatars with blend shapes and utility across the metaverse.
Our focus will be on various artistic and technical processes, including workflow conversion, material and texture optimization, rigging and blend shapes. We'll also delve into the configuration of the Blender assembler, automation, and code optimization.
The first step when working with an existing collection like this one is to analyse the content we have to generate in order to understand what needs to be adapted. Since this collection already had 3D resources, the art style was locked in quickly and we could focus on other key elements.
To create a generative PFP as 3D VRM interoperable avatars we follow a basic workflow consisting of:
Count the traits and set a technical budget for each type of art piece; here the geometry, texture and material boundaries are set
Prepare an atlas so we know which UV region belongs to which trait, taking into account the limit on the number of materials
Count the texture traits and create the textures in each UV region
Create a rig
Create a generation scene
Generate avatars and textures
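The atlas step above can be sketched as a simple region map. This is a minimal illustration, assuming a hypothetical 2×2 grid and trait names; the real atlas was laid out by hand.

```python
# Minimal sketch: give each geometry trait its own UV atlas region so
# traits never overlap in texture space. Grid size and names are hypothetical.
def build_atlas(traits, cols=2, rows=2):
    """Map each trait to a (u_min, v_min, u_max, v_max) rectangle."""
    regions = {}
    for i, trait in enumerate(traits):
        col, row = i % cols, i // cols
        w, h = 1.0 / cols, 1.0 / rows
        regions[trait] = (col * w, row * h, (col + 1) * w, (row + 1) * h)
    return regions

atlas = build_atlas(["Body", "Accessories", "Head", "Glasses"])
print(atlas["Head"])  # -> (0.0, 0.5, 0.5, 1.0)
```

Knowing each trait's rectangle up front is what later lets textures be painted per region and baked without collisions.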
Note: the Nouns were delivered in GLB format. Using a script that iterated over all the GLB files, we extracted each texture and exported the models as FBX so we could work in Maya.
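The original extraction script is not shown here. As a rough illustration of what such a script starts from, this is a minimal parser for the GLB container (a 12-byte header followed by length-prefixed JSON and BIN chunks), using only the standard library:

```python
import struct

def read_glb_chunks(data: bytes):
    """Parse a GLB binary: 12-byte header, then (length, type, payload) chunks."""
    magic, _version, length = struct.unpack_from("<III", data, 0)
    assert magic == 0x46546C67, "not a GLB file ('glTF' magic missing)"
    chunks, offset = {}, 12
    while offset < length:
        chunk_len, chunk_type = struct.unpack_from("<II", data, offset)
        payload = data[offset + 8 : offset + 8 + chunk_len]
        # 0x4E4F534A == b'JSON', 0x004E4942 == b'BIN\0'
        chunks[struct.pack("<I", chunk_type).rstrip(b"\0").decode()] = payload
        offset += 8 + chunk_len
    return chunks

# Build a tiny in-memory GLB with one JSON chunk to demonstrate.
json_payload = b'{"asset":{"version":"2.0"}}'
json_payload += b" " * (-len(json_payload) % 4)  # GLB chunks are 4-byte aligned
chunk = struct.pack("<II", len(json_payload), 0x4E4F534A) + json_payload
glb = struct.pack("<III", 0x46546C67, 2, 12 + len(chunk)) + chunk
print(read_glb_chunks(glb)["JSON"].strip())
```

The textures themselves live in the BIN chunk at offsets described by the JSON chunk's `images` and `bufferViews` entries; a library such as pygltflib handles that bookkeeping in practice.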
Workflow and collection adaptations
The Nouns did not fit some parts of the original workflow. The first challenge was to find a way to optimize the number of materials and textures used, as well as the organization of the rig.
Each Noun was made of 4 traits and 5 materials that could be optimized:
We separated the traits by geometry. The Nouns always have the same body but the geometry of their heads changes, so we had to separate them.
The Noun was divided as follows:
Geometry Trait —> Parts
Body —> Accessories, Body, Pants and Shoes
Head —> Head and Glasses
Joining the heads and glasses created a problem: there were 234 heads that would need blend shapes. The position of the glasses was specific to each head, so separating the glasses from the heads was not an option.
We have solved this problem below in the Blend Shapes section!
With the Nouns divided like this, a humanoid rig without fingers was made in Maya for convenience. The weighting was done and exported as FBX, and a scene was then prepared in Blender for generation.
As mentioned before, there was a problem with the blend shapes due to the high number of traits: there were 234 heads.
The glasses originally had the eyes painted into the texture; to allow more blend shapes and greater expressiveness, we created a geometry pupil that could move.
To reduce the amount of work involved in making 234 sets of blend shapes, we looked for patterns: selecting by UVs in Maya, we isolated the glasses in the viewport.
With the glasses isolated, we grouped those that were in the same height position so that we could work on the blend shapes of all of them at the same time.
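The grouping idea can be expressed as a small helper. The names and Y positions below are hypothetical placeholders (the real values came from the Maya scene), but the logic is the same: bucket meshes whose vertical position matches within a tolerance.

```python
from collections import defaultdict

def group_by_height(glasses, tolerance=0.01):
    """Bucket glasses meshes by their vertical (Y) position so each
    bucket can share one set of blend-shape edits."""
    groups = defaultdict(list)
    for name, y in glasses:
        key = round(y / tolerance)  # quantize Y to the tolerance grid
        groups[key].append(name)
    return list(groups.values())

# Hypothetical names/positions for illustration only.
glasses = [("glasses_head_001", 1.52), ("glasses_head_002", 1.60),
           ("glasses_head_003", 1.52), ("glasses_head_004", 1.60)]
print(group_by_height(glasses))
# -> [['glasses_head_001', 'glasses_head_003'], ['glasses_head_002', 'glasses_head_004']]
```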
With a clean nomenclature we created 11 blend shapes:
Once all the blend shapes were sculpted, they had to be created, or rather set up, as deformers.
There are two options here: repeat a process 234 times or automate it. We wrote a script to automate it, using the nomenclature to select and create the blend shapes correctly.
For the script, we organized the blend shapes into groups: one group held the 234 "Angry" blend shapes, another the 234 "Sad" ones, and so on.
The groups let the script iterate through them and select what it needs; if a shape is missing, this organization makes it easier to trace.
The script built a list with the names of all the main meshes (the keywords). The names of the 234 head meshes were extracted with another script that printed them to the console.
Then we created a dictionary to store the geometries for each keyword.
We take the main head (the base mesh), create the blend shape deformer, and add the remaining geometries from the dictionary as targets, matched by name.
The script does this for each keyword in the initial list!
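The selection logic described above can be sketched in plain Python. The mesh names are hypothetical, and the Maya-specific `cmds.blendShape` call is only indicated in a comment, since it must run inside Maya:

```python
from collections import defaultdict

def group_targets(mesh_names, keywords):
    """Group blend-shape target meshes under the keyword (base head name)
    their name starts with, relying on the clean nomenclature."""
    targets = defaultdict(list)
    for name in mesh_names:
        for key in keywords:
            if name.startswith(key + "_"):
                targets[key].append(name)
                break
    return dict(targets)

keywords = ["head_001", "head_002"]           # base head meshes (hypothetical)
meshes = ["head_001_Angry", "head_001_Sad",   # sculpted targets (hypothetical)
          "head_002_Angry", "head_002_Sad"]
targets = group_targets(meshes, keywords)
print(targets["head_001"])  # -> ['head_001_Angry', 'head_001_Sad']

# Inside Maya, the deformer would then be created per keyword, e.g.:
# for base, tgts in targets.items():
#     cmds.blendShape(*tgts, base, name=base + "_BS")
```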
Materials and textures
The original Nouns had another problem: poorly optimized draw calls. You guessed it: they had to be optimized. The originals had 5 draw calls and 4 meshes; the new ones have 1 draw call and 1 mesh.
The Nouns UVs did not follow an atlas, so they all overlapped. The solution in these cases is texture baking in Blender.
First, before starting the bake, we created all the combinations by joining the Accessories and Body textures. This gave us every texture possibility for the body, and also allowed us to modify the body UVs by hand so that the hands sat in their own region and did not overlap when baking.
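Enumerating the Accessory × Body combinations is straightforward; the trait names below are hypothetical placeholders:

```python
from itertools import product

def combine_textures(accessories, bodies):
    """Name every Accessory+Body composite texture that needs to be created."""
    return [f"{a}__{b}" for a, b in product(accessories, bodies)]

# Hypothetical trait names; the real lists come from the collection metadata.
combos = combine_textures(["acc_fries", "acc_none"], ["body_red", "body_blue"])
print(len(combos), combos[0])  # 4 combinations; first is 'acc_fries__body_red'
```

The actual pixel compositing of each pair was done with image tooling; this sketch only shows how the combination list is derived.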
For the Nouns avatars this bake is performed at the end, once the generation process has assembled a full avatar. See the section of this documentation Set up of the scene for the generation.
To optimize the quantity of materials and textures we must:
Set Blender to Cycles and configure the bake options as needed
Note: If the baked texture does not look good, it may be due to a wrong Selected to Active setting. Try adjusting the Extrusion and Max Ray Distance values until the result is correct.
Duplicate the created Noun and assign the final materials (in this case, ONE) with a texture created in Blender
Prepare the model to be baked. The Nouns do not have ordered UVs, so you have to do an Unwrap or Smart UV Project; configure the options according to your needs for a better result. Sometimes models come with separated vertices due to GLB conversion or other causes, so make sure the geometry is clean so the UVs can be repositioned correctly and automatically.
Select the meshes
Start the baking process. The result is a single texture containing all the information from the other 5
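The steps above can be collected into a short Blender script. This is only a configuration sketch under the assumptions stated in the comments; it must be run inside Blender, and object selection is done beforehand as described.

```python
import bpy

# Sketch of the bake settings described above (Cycles, Selected to Active).
# Run inside Blender; the extrusion/ray-distance values are placeholders
# to be tuned until the bake looks correct.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.bake.use_selected_to_active = True
scene.render.bake.cage_extrusion = 0.05      # "Extrusion" in the UI
scene.render.bake.max_ray_distance = 0.1     # "Max Ray Distance" in the UI

# Select the 5 source meshes, then the duplicated Noun (active object) last,
# with its target image node selected in the material, and bake:
bpy.ops.object.bake(type='DIFFUSE')
```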
Set up of the scene for the generation
To prepare the generation scene we will use Blender. The most important points are:
Geometry traits are organized in collections, in this case “Head” and “Body”.
All materials must have their textures assigned with a clean nomenclature.
The generation can be done randomly for new traits or, in this case, driven by a .json file that collects the information of the traits that already exist.
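Reading the existing traits from that JSON and looking up one token could look like this; the file layout, token id, and trait names are assumptions for illustration:

```python
import json

def load_token_traits(json_text, token_id):
    """Return the trait names recorded for one token in the metadata file."""
    data = json.loads(json_text)
    return data[str(token_id)]

# Hypothetical metadata mapping token ids to their trait choices.
metadata = json.dumps({
    "42": {"Head": "head-fries", "Body": "body-red",
           "Glasses": "glasses-square", "Accessory": "acc-none"},
})
traits = load_token_traits(metadata, 42)
print(traits["Head"])  # -> head-fries
```

The generator then pulls the matching objects from the "Head" and "Body" collections in the Blender scene using these names.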
Configuring and exporting the VRM
In order to configure and export VRM in Blender we need an add-on. Once it is installed, selecting the skeleton and going to Object reveals a VRM panel with all the configuration options.
We will have to configure a skeleton. If we have a shared skeleton, we only need to configure it once in the generation scene.
Next, the blend shapes are configured. We must select the mesh and the shape key corresponding to each preset.
Generate new traits
Nouns is a collection that is updated every day with a new Noun; some do not have a 3D head model and must be made from scratch, following the same rules so that the entire collection stays consistent!
Process automation as the key to success
The Nouns collection had a technical difficulty due to the number of head traits and the position of the glasses which was different for each head.
However, several important points can be summarized in order to be able to work with any existing collection that needs to be optimized and transferred to VRM.
Clean nomenclature, which helps us automate the processes more easily
Correct analysis of what we have and what we want to achieve
Detect the steps that are repeated in each trait to automate the process as much as possible
Animation Lead and Environment Artist
Passionate about 3D work since childhood, with rigging as my main passion