Having the avatars featured by VRChat.
Unity's social media account reposting our content during the challenge.
An increase in followers on both Instagram and Twitter.
It all started back in September of 2018. I wanted to start investing time into developing new game art styles using Unity. I wasn't sure how to start, but then I found out about the 100 days drawing challenge by Amanda Oleander on her Instagram.
Her challenge and commitment inspired me so much that I did my own version of it by doing 100 characters, 1 character a day for 100 consecutive days.
The condition for me was to make an Instagram and Twitter post with a moving character every day, so I had to create a steady workflow that could work for all the days of the challenge.
The challenge process
To succeed at a challenge like this you should define a process and try to follow it every day. This will help you focus, and it will slowly reduce the time you have to dedicate to the challenge, since your brain will be learning and adapting to the tasks.
If you don't know how to set up a process, that's okay; most of the time, processes are the result of repetition. Just start: do it once, write down the steps you took to get to the final result, and the next day try to repeat them. Over time, the process takes shape, evolves and improves.
During the 100 days, a lot of people asked us how we managed to make one character per day, so we'll take a general overview of the character creation process. Most of the characters follow this scheme.
- Fixing retopology with MayaLT
- UVs and Textures
To create the textures, we like to use Adobe Color in the studio. It easily helps us find color schemes that work for our characters.
- Rigging and Animation
- Unity scene set up
Finally we get to the part where I wanted to invest more time: Unity. The whole point of this madness was to force myself to use Unity as a quick tool to develop new visual concepts and ideas for future projects.
I'm not going to dig into every detail of what I did with Unity, but there are a couple of tools that helped me save time and get great results during the 100 days.
- Final steps
- BONUS: extra tips to iterate faster on a challenge like this one
Uploading them to VRChat as avatars
Halfway through the challenge we came up with the idea of giving a second life to all the characters by transforming them into avatars for the Metaverse. They were all already rigged with Mixamo, so we knew from experience that they could be used, at least in VRChat.
Months later I decided to give the avatar idea a go with the help of two interns in the studio.
Initially we just wanted to give them a simple rig and upload them to VRChat, but a few days into the work, the VRChat team reached out to us. They loved the variety of our characters and suggested we give them some extra love by adding visemes and optimising them for the new Oculus Quest release, so they could be used by even more players.
So we improved them and created a VRChat world to gather them. I must admit that investing some more time into adding visemes made the characters way more interesting and fun to use!
Here is a screenshot of our Notion.so board in the middle of the project.
There is already a lot of documentation about how to upload avatars to VRChat, so we won't be covering any of it in this post, but we'll be releasing another blog post later on with some tips for optimising avatars for Quest using MayaLT.
Next Steps for this project
As you can see in our roadmap, for now our closest goal is to keep uploading all the characters to VRChat with visemes; we're really close to having them all up and ready to use. At the same time, we'll be improving the world too.
After this, our next milestone is to tokenize these avatars using the blockchain. Our final goal is to release all the model files for free download on our site, as "open source avatars", so anyone can use our avatars in any virtual world platform or project they're developing.
While I was writing this post, the guys at LIV reached out to us to use the avatars on their platform, so you'll be able to use them for streaming Beat Saber soon.
If you have a VR platform or a project you might want to use our characters in, feel free to reach out so you can test them before we make the open source release.
If you want to see the characters during the challenge, you can check our Instagram and Twitter accounts.
Daniel García (aka Toxsam) is the founder and creative director at Polygonal Mind.
For a while, I had to work on a vertical slice of an Escape Room, building it from the ground up while focusing on simple mechanics, even simpler input controls and a low poly aesthetic to fit within Decentraland's smallest parcel limitations.
I had the pleasure to work with Laura Usón as an Artist, Javier Diez as a Level Designer (go give them a follow, they're both great) and some other friends of Polygonal Mind as testers, who made sure the level of difficulty of the puzzles stayed consistent throughout the experience.
I can't stress enough how important having a fresh pair of eyes is, especially after working a few days on something, to get rid of your tunnel vision; grab some friends from all backgrounds (hardcore gamers, mobile gamers, casual gamers, non-gamers...) and make them play your stuff!
First things first, what's an Escape Room?
If you haven't been living under a rock, you probably already know this. But for the people out there who still don't know about these fun group experiences: an escape room is a themed, timed challenge where, with the help of other people, you try to solve all the puzzles necessary to get out of the room before the timer goes off, usually around 60 minutes. There are many types: more logical, more physical, with actors, military-themed, zombie-lab-themed, and so on.
Beating the designer's blank page block
Level Design and iteration
After deciding what kind of puzzles we were going to use, we proceeded to do a simple blockout for level design. Personally, I prefer to do it in 3D and then judge whether it's good or not through testing and iteration, but this time the team decided to do an overview on paper first and then translate it to a 3D blockout.
Once that was out of the way, we made a list of the props that would be needed for the room, so Laura and I could work at the same pace: she made the models and textures, and I took care of the sounds, puzzles and coding.
I started using the FPS Controller from the Standard Packages from Unity. I tweaked it to disable jumping and camera bob and then began with the puzzle experimentation.
Here's a video of how the sound slider puzzles looked in the beginning (they were wall levers, and the controls were click + drag instead of just left click or right click):
Here is a video comparison where you can see the progress on the sound puzzle.
I went through a couple of iterations, where the sliders were still levers but a lot smaller than the wall ones in the first video, and they weren't attached to the radio, but this confused players.
Ultimately, players associate the sliders with sound, and it's a lot easier to understand that higher-frequency sounds go up and lower-frequency sounds go down. This couldn't have been fixed without playtesting.
Every puzzle went through the same iteration process shown above:
The cherry on top
The whole team made a great effort to make this project work in a short period of time. We used Unity because it seems to be what Decentraland is going to end up using. Most of the post-processing tweaks we added won't be usable in DCL anytime soon, but we still wanted this small demo to shine.
I had a blast working with skilled peers and although I usually work as a 3D artist, I love doing some coding and SFX work from time to time.
Small challenges with tight deadlines push you out of your comfort zone and make you focus on what's important: making it work and shipping it on time. You can't get lost in polishing or stuck improving your code; if it works, just move on to the next thing on the list.
This was an awesome experience and I'd be pleased to do more stuff like this in the future!
Soon we'll be releasing a link to download and play the Escape Room :)
Alejandro Bielsa is a junior 3D artist working at Polygonal Mind's in-house team.
Passionate about videogames, vivid tutorial drinker and cat lover.
This post is about a mobile premium game we are working on along with Crescent Moon Games, called Ravensword Legacy, which is currently in development.
This week I had to revamp the awesome characters already made by another team member in order to allow them to talk. After some research, I found a Unity plugin called LipSync Pro that lets you add keyframes to Audio Clips, so that the character that is talking moves their mouth accordingly. It also supports other blendshapes, like blinking or yawning, and even some preset expressions, like angry and happy among others, so you can assign an expression to each of the character's lines.
The core of spoken languages
For this kind of work, game developers usually group phonemes together: for example, the "k" in "key" sounds the same as the "c" in "car", so only one phoneme is needed for that sound. The same goes for "m", "b", "p" and so on.
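The grouping above can be sketched as a small lookup table. The groups below are illustrative assumptions for demonstration, not the table LipSync Pro actually ships with:

```python
# Illustrative viseme grouping: letters that produce a similar mouth
# shape collapse into one group. The exact grouping is an assumption.
VISEME_GROUPS = {
    "K": {"k", "c", "q", "g"},   # back-of-mouth consonants
    "MBP": {"m", "b", "p"},      # closed-lip consonants
    "A": {"a"},
    "EI": {"e", "i"},
    "O": {"o", "u"},
}

def viseme_for(letter: str) -> str:
    """Return the viseme group a letter maps to, or 'REST' if none."""
    for group, letters in VISEME_GROUPS.items():
        if letter.lower() in letters:
            return group
    return "REST"
```

This is why "key" and "car" only cost you one mouth shape between them.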
Adapting to the new needs
I proceeded to modify the models: opening their mouths and adding the inside of the mouth (commonly called the mouthbag), the tongue and the teeth. I also had to modify the textures so the teeth, tongue and mouthbag were textured.
After this, I duplicated the resting pose and modified it three times, for the A, E/I and O phonemes. As the game is low poly, with pixel post-processing and a limited colour palette (sometimes even as low as 8 bits!), too much fidelity and/or fluidity would make it look uncanny.
The heads were exported as a single head with 4 blendshapes, using the modified mouths as targets for said blendshapes.
Setup of the system
I created a LipSync info .asset file from the Audio Clip via LipSync's Clip Editor (shortcut Ctrl + Alt + A) and started adding the phonemes that matched what the line was saying. Having only 3 phonemes really sped up this process; otherwise it would have been too tedious. After that was done, I saved the LipSync info .asset file in the same folder as my Audio Clip.
Each of these black markers means that the mouth will change to the specified phoneme at the specified time. Once this was done, I went back to the prefab of the character's head, added the LipSync script and assigned the head mesh as the main mesh and the teeth as the secondary mesh. This means that the head blendshapes will drive the teeth ones too. I also assigned the Audio Output of this character to be the origin of the sound of the line and dropped it into the slot.
I then specified which blendshapes were to be assigned to which phonemes, so that LipSync knew what blendshape it had to change every time the time slider passed through a phoneme marker.
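Conceptually, what happens at playback can be sketched like this (a hypothetical stand-in, not LipSync Pro's actual API): given the phoneme markers on the timeline, the most recent marker decides which blendshape gets full weight.

```python
import bisect

# Hypothetical sketch of a lipsync driver: find the active phoneme at
# time t and give its blendshape full weight while zeroing the others.
PHONEMES = ["A", "EI", "O"]

def blendshape_weights(markers, t):
    """markers: sorted list of (time, phoneme); t: playback time in seconds."""
    times = [m[0] for m in markers]
    i = bisect.bisect_right(times, t) - 1
    active = markers[i][1] if i >= 0 else None  # before first marker: rest pose
    return {p: (100.0 if p == active else 0.0) for p in PHONEMES}
```

With only three phonemes, the whole mouth state is three weights, which is part of why authoring the markers went so fast.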
And so this is the end result! It was a very fun experiment and I'll probably end up using this method again in the future for personal projects.
Please be aware the audio clip was a test one to make sure the plugin worked and it's not intended to be used in the final product, since it's a dubbed line from another game.
If this was helpful to you in any way, please consider sharing it with your gamedev friends; we really appreciate your support!
Alejandro Bielsa is a junior 3D artist working at Polygonal Mind's in-house team.
Passionate about videogames, vivid tutorial drinker and cat lover.
A little bit about the project
Now that we know more about this virtual world, we can officially start.
Breaking down the project.
The Aetherian Block Museum is an ambitious project, and to have better control over the development sprints we divided it into 3 different stages.
- The structure
This is the overall shape and feel of the building; it includes everything that cannot be moved or duplicated inside the scene. For a real-life analogy, this would be the job of an architect.
- The props
We define as props anything that can be duplicated in the scene and doesn't belong to the structure; a bench, a painting or a vending machine would be a prop. In real life, this would be the work of an interior designer.
- Animations and final polishing
After everything else is done, we will invest some extra time giving life to the scene with animations and interactions. In conclusion: the job of a real-life wizard.
Since this project is still in development, in this post we're going to focus on the structure.
A Basic outline
When working on a Decentraland building project, it's important to note that there are technical limitations everywhere: on size, on polycount, on textures... It all depends on the extent of the terrain you are going to build on.
Knowing this, when I have size limitations I usually start by creating a cube of the maximum allowed size and use it as a visual reference for where the limit is.
Learn from the Masters, Gather References
Never underestimate the power of references for getting a general idea of what we want to transmit or the look we want to achieve.
I like to start by going on Pinterest and looking for references, in this case in a cyberpunk style. Since this district does not yet have an exact design, I looked into the most "iconic technological futures" in movies and videogames; Blade Runner, Tron, Rune and Ghost in the Shell were some of the major references for this project.
Also, one thing I tend to do is gather different images that have some detail I like and that I think could fit the project, regardless of the overall style; this way I can mix new ideas into it.
Block the Idea out!
Once we have some ideas about what we want, it's time to make a visual MVP (Minimum Viable Product). At Polygonal Mind we call these blockouts; with them you can see the overall shape of the building, helping you visualize the final product, in this case the museum. Generally, at this stage the ideal is to make at least 3 totally different variations, so we can experiment with different shapes and ideas and see which one is the most feasible.
In this case, three buildings were made, each one with a different shape, so that in the end we could reach the most pleasant form while taking advantage of the maximum size.
Since the intended design is technological, I added some emissive lights. Here, the contrasting colours help visualize how these lights could work.
After this first stage, we decided to post the results on the Decentraland and Aetherian Discord servers to gather some feedback.
The devil is in the Details
Once we chose the most viable option, in this case the second one, we moved on to the details. In the first stages of modelling I tend not to worry about the polygon count; I just look for the overall shape.
At this stage, however, I remade some of the pieces of the model, so I could draw details on it in a far more controlled way. A good example of this is the exhibition floors.
If there is a feeling, there is a style
As I explained before, Decentraland is divided into various districts, each one with its own style; this building is destined for the cyberpunk district. However, this style can encompass a lot of different feelings.
So in this project we tested different color palettes: one similar to the aesthetic of Tron, another darker and grimmer, and the last one more natural.
As I mentioned in other articles, Adobe Kuler is a great tool for this: you can search for palettes or create your own to fit your project very quickly.
Again, we made 3 different variations and went to the DCL and Aetherian Discord servers to gather some community feedback.
Once we have the basic colors, we can move on to detailing the textures using the selected colors; in this case the dark palette was chosen.
This was a basic outline of my process when it comes to creating buildings for Decentraland, though depending on the project's needs this can change slightly. For example, for organic objects I tend to use Zbrush.
Nevertheless, never forget the musts. Reference, Blockout and Detail.
See you in the next case study :)
Laura Usón is a 3D artist working at Polygonal Mind's in-house team.
First things first: set up a moodboard.
Before opening any 3D software, we look for references and place them together...
When it comes to picking the right colors, I like to visit Adobe Color to quickly make my own set of color palettes, or to create one from the colors of the reference images.
These are some of the palettes for this project:
Opening Substance Painter
To open the project, we can use the default template for Painter, since we are only going to use the albedo in this project; but if we want to use some opacity to add transparency to the model, I prefer to use the left one.
It's not necessary to decide right now, since you can change it at any moment from the shader settings.
It's possible to import normal maps and other additional maps when you start a project, but you can get a pretty good result by baking the mesh maps in the Texture Set settings.
One of the things I like to do is add a Fill layer with a base color, without detailing anything, to check that the composition works. At this point I usually work only on the Base Color channel, since the other channels aren't crucial at this moment.
Another thing I like to do is work with fill layers and paint the masks; that way, if you want to change the color to try another tone, you can do it easily in the fill properties. I consider this one of the good practices in Substance, leaving the paint layers for small details later on.
To add some texture to the base color, I duplicate the Fill layer and change the color to a darker tone to see the changes. Then I add a white mask and start tweaking it.
I like to add a generator with a Mask Editor, since it gives a really good base for a mask, and raise the ambient occlusion and curvature.
Then I add a Sharpen filter to harden the edges of the color a little, and finally I add Levels so I can invert the colors. Here is the difference:
Here is the final result (without the other colors):
Then I duplicate the base color again and choose a lighter color to add some light. I do this by adding a white mask and using the Mask Editor again, but this time I work with the textures too.
As you can see in the image, I set the first texture to Multiply and Texture 2 to Add/Sub. For the textures in this case I used ones that come with Substance, since it has a great library, but you can also import your own resources from File.
When I paint, I like to add some contrast to the base to balance the color output. To do this, I add a fill layer set to Soft Light or Overlay and modify it with the Light generator, giving the light some direction and gradation; I usually leave the Light generator on Divide.
For this part I recommend using a normal paint layer, so you can paint over the base, picking colors from it with the brush. For the brush itself, I usually tweak the flow and the stroke opacity and leave them at half.
Be sure to check that both of them work with pen pressure, since it's deactivated by default. Then, in the Properties tab, I tend to use the Spots brush and change the alpha to dirty blurry dense. I find this to be one of the best options, since it's round but without a perfect shape.
As you can see in the layers in the image, the next step is to separate and change colors. For the final layers, the ones on top, the stroke opacity and flow tend to rise, since the hard lights sit on top of the surface.
Moreover, in this case I had to use the Blur filter so I could give the ice cream its fluffy look.
Application to the other Materials
For the rest of the items on this model I applied the same principles explained before. For the chocolate, for example, I created a Fill layer with a dark color and added a black mask, then painted in white with 0% hardness until I got a ramp of browns as a result. Same with the strawberries.
For the wrapping paper, I used a stencil to write the letters on the paper using a Fill layer with a black mask; I load my alpha on the brush and paint at the position I want.
Working with other Maps
When you want to work on the other maps, like opacity or roughness, create another layer with only that channel active and the rest inactive; that way you can better control the results without worrying about the rest of the maps.
All in all, Substance Painter is a really powerful tool that can be used for a great variety of projects, letting you hand paint the textures directly on the texture sets, unlike Zbrush, which paints vertex colors, or Photoshop, which doesn't let you see the model in 3D.
Good tool and Good luck.
Laura Usón is a 3D artist working at Polygonal Mind's in-house team.
Case study: Morphite
Crescent Moon Games is a well established publisher on iOS and Android that focuses on premium games with a huge emphasis on good quality graphics. They led the game's development, alongside the development team at We're Five Games, who took care of the programming and animations; Blowfish Studios, a game development company that published the game on consoles; various freelancers for level design and music; and us, Polygonal Mind, in charge of making the game's 3D content.
Morphite is a casual atmospheric FPS mainly inspired by Metroid Prime and No Man's Sky.
The premise was to have a main story with fixed planets and a procedural universe, meaning endless planets to visit and explore, filled with different animals and vegetation that you can kill and scan. Oh, and it had to run on mobile.
Morphite took us a year and a half, from the first art test we made for the game to the final release. There are tons of problems we solved and dozens of stories to be told about the game, but this time we are going to focus on:
The lowpoly look and colors
One of the reasons this game is made of big flat polygons is optimization: fewer polygons, fewer draw calls. Also, most of the game's models don't have any textures, so every color you see is a material, and every material changes its hue based on the planet you are visiting.
This applies to animals, items, characters, trees, environments, dungeons... probably everything except the weapons and main characters.
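The per-planet recoloring is easy to sketch outside Unity. This illustrative Python snippet (not the game's actual shader code) shifts a material's hue by a planet-specific offset while keeping saturation and value intact:

```python
import colorsys

def shift_hue(rgb, offset):
    """rgb: (r, g, b) components in 0..1; offset: hue shift in 0..1 (wraps)."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + offset) % 1.0, s, v)

# The same flat material rendered on two hypothetical planets:
desert_fur = shift_hue((1.0, 0.0, 0.0), 0.0)   # unchanged red
ice_fur = shift_hue((1.0, 0.0, 0.0), 0.5)       # opposite hue: cyan
```

Because the shift preserves saturation and value, the flat-shaded look survives the recolor, which is what lets one animal read as native to many planets.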
Here are some examples of the same animal in different planets:
The procedural levels
One of the most ambitious features of the game, being able to visit an endless number of planets, was also one of the most challenging. We started by creating boundaries on exploration: inside a planet you can only walk or use land vehicles, and the only way in or out of the planet is your drop pod. This allowed us to use a procedural level system like you would see in games like Diablo.
We had to be careful to make the terrain chunk connectors look natural, so the player could not spot them easily, but they also had to be fixed so we could add doors, walls and caves between them.
We also separated the terrain from the walls, allowing us to make wall variations for each terrain chunk.
Thanks to this, we could use the same terrain piece several times without it feeling repetitive, and also sometimes repurpose pieces as caves.
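As a rough sketch of the idea (all names and counts below are made up for illustration), a seeded random pick of terrain chunks plus independently chosen wall variations gives reproducible planets that rarely feel repetitive:

```python
import random

# Illustrative chunk catalog: terrain pieces have fixed connectors, and
# each wall set has a few variations that can dress any terrain piece.
TERRAIN_CHUNKS = ["flat_a", "flat_b", "hill_a", "canyon_a"]
WALL_VARIATIONS = {"rock": 3, "crystal": 2, "vines": 2}

def build_layout(num_slots, seed):
    """Assemble a level layout: each slot gets (terrain, wall set, wall index)."""
    rng = random.Random(seed)  # seeded so a planet always rebuilds identically
    layout = []
    for _ in range(num_slots):
        chunk = rng.choice(TERRAIN_CHUNKS)
        wall_set = rng.choice(list(WALL_VARIATIONS))
        wall_idx = rng.randrange(WALL_VARIATIONS[wall_set])
        layout.append((chunk, wall_set, wall_idx))
    return layout
```

Seeding per planet is what makes "endless planets" cheap: only the seed needs to be stored, and the layout regenerates on every visit.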
The procedural creatures
If you play Morphite you will find dozens of different creatures, critters, enemies and animals. This is mainly because we made a huge number of them, but also because the way we made them allowed us to mix parts between creatures and create variations easily.
After deciding what needed to be done, we used Zbrush to make the basic body shape.
We used ZSpheres to make a quick topology.
Then Maya to separate the body parts, fix the topology, add materials and get everything ready for Unity!
Having the animal broken down into separate parts allowed us to make new animals out of already-finished animal parts, create body part variations, and also saved us some skin weighting time.
Most of the wildlife in Morphite has at least 6-7 head variations, 4-5 body variations and different limbs.
The feline has up to 17 head variations!
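Mixing parts scales multiplicatively: the number of distinct creatures is simply the product of the per-part variation counts. A quick sketch (the part names are illustrative):

```python
from math import prod

def creature_combinations(part_counts):
    """part_counts: number of interchangeable variations per body part."""
    return prod(part_counts.values())

# Even modest per-part counts multiply out fast.
base_animal = creature_combinations({"head": 6, "body": 4})     # 24 creatures
feline = creature_combinations({"head": 17, "body": 4})          # 68 creatures
```

This is why authoring a handful of extra heads was far cheaper than sculpting whole new animals.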
The fixed main story levels
Morphite has 16 handcrafted story levels, most of them planets, and they are the result of collaboration between the whole team and a well defined workflow, even though we all worked remotely.
First, the awesome level designer Mike Madden made a blockout of the level with cubes inside Unity.
Then we took those cubes into Maya and organised them into regions.
Afterwards, we took every region to Zbrush and, using Dynamesh, sculpted the terrain out of those blocks.
Using Decimation Master, we reduced the polycount of the terrain to the desired amount and exported it back to Maya.
We used Maya to fix the bad-looking topology that polygon reduction tools like Decimation Master usually produce, plus to add some special details that are easier to make in polygonal modelling software.
We also added materials and exported everything as separate FBX files.
Finally, we imported the terrain into Unity so the game designer and the programmer could add functionality to the level.
Morphite came out on iOS on 20/09/2017 as a premium game, with a great App Store feature for 2 weeks.
It totalled 41,000 purchases that generated more than $200,000 in revenue.
Months later it was released on Android as a free game with in app purchases.
It got more than a million downloads that generated more than $40,000 in revenue.
The game was also ported to Steam PC by the We're Five Games team.
And it got ported to Xbox, PS4 and Nintendo Switch by Blowfish Studios.
Morphite is a huge mobile game, not just because it has an open world to play in, but because it has an endless number of them!
In this case study I focused on the most relevant parts of the visual side of the game, but there is way more to cover: procedural space stations, vendors, ships, humanoids, side quests...
This game was made by a remote team working across different time zones and continents, through collaboration and dedication, and I'm really proud of all the work we managed to ship.
Daniel García is the founder and creative director at Polygonal Mind.
Have a cool project you'd like to discuss?
Let's talk about it!
This post is about a mobile premium game we are developing in-house, called Ma'kiwis.
Long story short, Ma'kiwis is an adventure game for mobile devices where you play as a shaman leading mini tribespeople to safety.
This week's sprint was about adding a few items to the game and making the first cutscenes, so we could start testing the game flow with them in place. I was assigned to work on the cutscenes that give the player an introduction to the game's plot and gameplay: basically, the tutorial.
Story boards of the cut-scenes in Level 1
Maya animations + Unity's Animation System was too messy
After trying for a few days, I felt the system we were using was a bit limiting: it wasn't letting me do basic things like blending cameras or timing events such as camera shakes or starting and stopping Particle Systems. We even had to animate character interactions together as a single GameObject using Unity's Animation system.
The thing I disliked most about this previous system was the inability to blend between pre-fixed cameras.
This meant that we couldn't go back to the Main Gameplay Camera after a cutscene, which resulted in cuts every single time; either that or fades to black. This felt too repetitive in my opinion, since there are a lot of other camera cuts when simple events occur in-game, like activating a switch or picking up a collectible. I wanted something a bit more dynamic that attracted the player's attention, hence the blending between cameras was really needed.
After talking with the rest of the team, we decided to upgrade the Unity version we were using (from 5.6.4f1 to 2018.2.0f2) so we could use Timeline (included since Unity 2017.1) + Cinemachine.
Example of the Popping problem when using Animator
Cinemachine is a free asset developed by Unity that brings a lot more freedom and a more cinematic look to Unity's camera system, letting you control Field of View, Depth of Field and camera collision, and giving you the much-needed blending between cameras, among other great features.
Cinemachine + Timeline is a very powerful combination!
This happens because we were actually blending from the Main Camera to the position of the Cutscene Camera, which caused a stutter: the Following script kept trying to go back to the gameplay position. Basically, there were two parts telling the camera what to do: the Following script was telling it to keep aiming at the player, and the gameplay part (which we were blending to) was telling it to follow the spline/bezier until it reached the cutscene camera position.
This way we always have the position where the camera should be during gameplay, so we can blend back to it after a cutscene.
Another possible solution could have been to have a Master Camera and use the Gameplay Camera as a cutscene camera, so the Master could blend between them without stutter, but that would have meant changing the whole camera system, and we couldn't afford that.
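The "single authority" idea behind our fix can be sketched in a few lines (an illustrative Python sketch, not Cinemachine's API): during a blend, only the blender writes the camera position, easing from the live gameplay position to the cutscene position, so no second script fights it.

```python
def smoothstep(t):
    """Cubic ease-in/ease-out, clamped to 0..1."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def blend_position(gameplay_pos, cutscene_pos, t):
    """Sole authority over the camera for this frame; t in 0..1."""
    k = smoothstep(t)
    return tuple(g + (c - g) * k for g, c in zip(gameplay_pos, cutscene_pos))
```

Running t from 0 to 1 over the blend duration (and 1 to 0 on the way back) gives the stutter-free transition, because nothing else writes the camera transform while the blend is active.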
Hope my struggle with the cutscenes can help someone. :)
Alejandro Bielsa is a junior 3D artist working at Polygonal Mind's in-house team.
Hello there! For those who don't know me, my name is A. Daniel García Aranda and I'm one of the co-founders of Polygonal Mind, and now its sole owner.
This is meant to be the first blog post, and I'm going to tell you about the company's journey from day -1 to the present day: what drove us to start a business and why we made the decisions we made along the way.
Last year my friend Jesus and I were working at Imascono, a Spanish augmented reality start-up, we both liked our job there, but we always loved video games and we both felt AR wasn't that fulfilling anymore.
On our daily work breaks and in the time we spent playing video games together, we spoke a lot about making our own project, until it finally happened: we started our own project in our free time.
From the start we were really passionate about it; we couldn't wait to finish our morning jobs and come back home to work on our personal project. This obviously had consequences: we started to show less and less interest in our AR jobs, and everyone at the office saw it except us.
Some weeks later, Jesus's contract with Imascono ended.
And a couple of weeks later in March of 2015 I was fired.
The waking up months.
Both of us have programming backgrounds, and neither of us really enjoys coding that much, but since Jesus had more experience than I did, he took the coder role and I took the artistic side of the project.
Meanwhile, we did our research on how to create a company outside Spain, and I must say, if there is something we did right from the beginning, this was it. Maybe in the future I'll write about why we made this choice.
In August we procrastinated a lot, and from that month on, Just Leaping started to become a car stuck in the sand; the problem was that we were in the middle of the desert and we didn't even notice.
Finally, in October of 2015, Polygonal Mind was founded.
2016 started and we did an annual review of the business and the game; where once there was nothing but positivism and confidence, now it was all gone. I think we both knew at that point that the project was not going to make it to the stores anytime soon, but after some talking we promised ourselves that we were going to release the game. We did learn A LOT through 2015, and that price was worth it.
I was quite motivated myself, since I was moving to London with my girlfriend for at least 3 months on March 17th. I wanted to attend some events to do networking and make things actually happen, so I had the idea of looking for a publisher to help us finish our game and make some bucks. I spoke with Jesus about it, but he didn't like the idea that much; he reminded me that we had agreed on doing it all by ourselves from the beginning. Also, bringing in a publisher would add more months of work to Just Leaping, and that was something we didn't like at all. This conversation was a breaking point for us. On one side, I wanted to make a profit from our almost a year of development; on the other, Jesus didn't care about that stuff anymore. He was really tired of the project and just wanted it to be finished; coding was never his favourite skill and he was getting really sick of it. This made him rethink everything we were actually doing and whether it was really his goal after 11 months of development.
A couple of weeks later, Jesus told me he wanted out: out of the project and out of the company we had created together. After talking about it, I understood that coding was really stressing him out and that his life goals had changed over the course of the year. Although we both wanted to finish the project we started, we could not do it together as a team, and we agreed that we needed to go our separate ways.
He was burned out and I cannot blame him since I was feeling the same thing.
After reading these books in a couple of days, I decided to start taking action, so I bought a cheap graphics card on Amazon and started making 3D art the next day. I understood that I can't make video games on my own, and I don't want to work for any company but my own. That's why Polygonal Mind needed to pivot in a direction that fit my skills: a 3D design studio.
Why did I take this decision? Well, one of the reasons for starting Polygonal Mind was to make products that we could proudly say are ours. We deeply believed that we could do better on our own than working for others, and that by doing so we were going to learn way more. Somehow we both lost our path to this great idea we had on day -1, and it took me months to find the way back to it. Now my real goal is to keep these ideals alive every day, no matter what.
After this huge step back to my mental canvas, I started to make more changes in the company: I made this website and started to network with more confidence.
Three weeks later I started to have some clients and started to work on Morphite with Crescent Moon Games and We're Five Games.
So what's going on now?
Now, I'm cleaning up minor projects to be able to start sharing more about my artwork and my path to become the best 3D company around, it's gonna be one hell of a ride :)
Jesus is now improving his 3D modelling skills, and he hasn't typed a line of code since May.
As for Just Leaping, development is on hold; we haven't worked on it anymore. As for the characters, I have a couple of cool projects in mind, but I still can't say anything about them.
I also want to thank you for reading my story; it took me forever to write this post, due to its sentimental value.
I know it's a long read, and I had to cut some content so it wouldn't become a super long post, so if there is anything you would like to ask, feel free to do so and I will be glad to answer :)