Art Spotlight: Here, at Last


Hi, my name is Yann and I’m a French CG artist, long-time Sketchfab user, and avid video game fan. As such, my main focus is on real-time interactive experiences. I really enjoy giving life to concept art in interesting ways to tell a story, and solving the technical hurdles I might encounter along the way.

Since the early days of 3D graphics, there has been a relentless quest to make 3D look and feel like 2D animation, and for good reason. Computer graphics are, by design, generated: they adhere to a set of rules, are predictable, and therefore look…3D. Hand-drawn animation, by contrast, is bound only by the will of the hand drawing it, so you can quickly see why mixing the two can be quite challenging, especially when you consider that real-time 3D adds a bunch of constraints on top of it all.

Recently, however, the game studio Arc System Works, with their latest games Guilty Gear Xrd and Dragon Ball FighterZ, managed to create real-time animated 3D characters so similar to hand-drawn animation that they could have fooled anyone.

Some examples of Arc System Works’ game visuals

I had to try it myself.

At this point, I think it is important to clarify what I will mostly talk about in this article. As stated, my goal with this project was to take one of my own designs and recreate it using a process meant for a custom real-time cel-shader. However, because Sketchfab uses its own built-in shaders, I had to bake the end result into a giant texture map, defeating the whole purpose. I will therefore mainly describe how I created the in-engine version of this model, because that is what’s special about it.

Maya viewport preview with real-time lighting

The theory

Arc System Works gave a talk at GDC 2015, going deep into the creation process used in Guilty Gear Xrd. While I urge you to watch the full talk for yourself, I will summarize it here.

  • Make it look 2D
    • Clean two-tone shading
    • Clean line work, from afar and close up
  • Kill everything 3D
    • No pixelated textures or shadows
    • No normal maps
    • No visible tessellation
  • Have intent: each part needs to be controlled by the artist, not generated
    • Which parts of the model should be shaded
    • How and where the lines should be drawn

Basically, in order to generate 2D-looking CG, their artists needed to put a lot of rules in place without using textures, the most common and flexible tool we have.

So they relied on the next best thing: vertex data. Because vertex data is linearly interpolated from one vertex to the next, there is no risk of lacking definition. The downsides are the need for a lot of vertices and the difficulty of editing said vertex data.

Using textures versus using vertex color data

The Actual Process

Design

For this project I decided to use a long-running character of mine called Chay. He’s a free-spirited guy wearing high-tech prostheses on his lower body.

Even though his design was already there from previous incarnations, I had to figure out which modifications I should make to better suit the desired art style. Because cel-shading only has two states of lighting, it lacks the nuance required to read subtle shapes and volumes. With that in mind, I reworked the design to include large features, sharp edges, well-defined muscles, cloth wrinkles, and tech details.

Modeling

In most cases I start with a drawing. In this instance, however, because I already knew this particular design so well and had a pretty clear idea of where I was going, I jumped straight into ZBrush and started sculpting. ZBrush’s freeing approach to 3D modeling allows for quick iteration, making it a great tool at this stage.

Once happy with the result, I moved the high-definition sculpt into Maya to begin the painful process of retopology. While classic retopology is not the end of the world, this one was anything but classic. Because of this specific process, it was crucial to carefully place vertices where information would later be needed, to have edges where inlines would need to be drawn (more on that later), and to think about normal interpolation and skinning deformations, all while, of course, staying within a polycount budget of around 20k yet keeping smooth curvature even in close-ups.

Trying this for the first time resulted in a lot of back and forth, especially for the head. Turns out planning for future steps you know very little about is a recipe for disaster.

Editing Vertex Data for the Shading

With the final mesh in hand, it was time for the hardest part: the lighting.

The math behind it is actually really simple and I’ll refer you to the corresponding GDC slide:

So the idea is to store a threshold value from 0 to 1 in one color channel of each vertex and use this value as a controller to adjust whether that vertex is more likely to be lit or shaded, with 0 as always shaded and 1 as always lit. This value is then compared to the dot product of the normal vector and the light vector.
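
To make the comparison concrete, here is a minimal Python sketch of that per-vertex decision. Note that the remapping of the dot product into the 0–1 range and the exact inequality are my own assumptions; the slide only shows the comparison itself.

```python
import numpy as np

def two_tone(normal, light_dir, threshold):
    """Return True if the vertex is lit, False if shaded.

    normal, light_dir: unit 3D vectors.
    threshold: per-vertex value from the color channel,
               0 = always shaded, 1 = always lit.
    """
    n_dot_l = np.dot(normal, light_dir)
    lit_amount = n_dot_l * 0.5 + 0.5  # remap [-1, 1] to [0, 1] (assumed)
    return lit_amount > (1.0 - threshold)

# A vertex facing the light with a neutral 0.5 threshold ends up lit:
two_tone(np.array([0.0, 1.0, 0.0]), np.array([0.0, 1.0, 0.0]), 0.5)  # True
```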

In order to start previewing the result I had to start working on the actual custom shader that would be used in the end. Fortunately, Maya introduced a node-based shader editor a few years ago: ShaderFX. Unfortunately, like everything in Maya, it’s a very complex but complete tool that doesn’t hold your hand but lets you tweak everything if you’re brave enough. The basic mode is great for modifying standard shaders, but in order to have custom lighting I had to delve deep into the advanced mode to finally create the simple shader I needed.

With the preview finally working, it was time to tweak the threshold. Because I first worked on the body parts, it started as a rather simple process. Vertex normals didn’t need to be edited on those parts, and Maya has a vertex color painting tool that lets you paint specific channels, set values and more, which made the whole process pretty easy. Starting from a default value of 0.5, I then slowly darkened the areas I deemed should be shaded: under the arms, in the cavities between muscles, between the legs, all while moving the light around to keep consistency. In a sense it looked a lot like an AO map; as such, an alternative method could have been to bake a quick AO map and apply it to the vertex color as a starting base.
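
Seeding that default value is easily scripted. Here is a hypothetical Maya helper along those lines; the mesh name and the choice of writing all channels are illustrative, not my exact setup:

```python
import maya.cmds as cmds

def seed_threshold(mesh, value=0.5):
    """Initialize every vertex of `mesh` with a neutral threshold,
    ready to be darkened with the vertex color painting tool."""
    verts = "{}.vtx[*]".format(mesh)
    # Write the same value to all channels; the shader only reads
    # one of them as the threshold.
    cmds.polyColorPerVertex(verts, r=value, g=value, b=value, a=1.0)

seed_threshold("chay_body")  # "chay_body" is a made-up mesh name
```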

When I started working on the face, however, it got really messy. The problem with lighting a face in this art style is that it doesn’t make any sense. None. No care is given to the actual shape, volume, cavities, laws of physics, or anything really. It just needs to fit the artist’s vision, which is perfect, because that’s what this workflow is designed to allow. But ultimately a set of rules defining this vision was still needed, and finding those rules meant experimenting.

Experimenting requires back and forth, and at this point I realized that Maya’s default tools for editing vertex normals and even colors were severely lacking. I couldn’t copy and paste normal values between vertices, average normals, or use other features that would make the whole process more efficient.

Thankfully, because I hate the repetitive, meaningless tasks that often arise in this profession, I had learned some handy scripting skills I could use to build my own tools in Python. So I spent some time making tools for vertex color and normal manipulation.

The solution for lighting the face was to define zones that would always be equally lit. These zones needed to be separated by edges, which forced me to go back to my original modeling. You then apply the same normal to the whole zone, and if you need additional control on top of that, you tweak the threshold value.
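
As a sketch of what such a tool can look like, here is a hypothetical Maya Python helper that averages the normals of a zone’s vertices and writes the result back to all of them, so the whole zone lights uniformly (the function name is mine, not from my actual toolset):

```python
import maya.cmds as cmds

def unify_zone_normal(vertices):
    """Average the normals of `vertices` and apply the result to all
    of them, so the whole zone receives identical lighting."""
    normals = cmds.polyNormalPerVertex(vertices, query=True, xyz=True)
    # The query returns one normal per vertex-face, flattened; average them.
    xs, ys, zs = normals[0::3], normals[1::3], normals[2::3]
    count = len(xs)
    avg = [sum(xs) / count, sum(ys) / count, sum(zs) / count]
    length = (avg[0] ** 2 + avg[1] ** 2 + avg[2] ** 2) ** 0.5
    avg = [c / length for c in avg]
    cmds.polyNormalPerVertex(vertices, xyz=avg)

# Run on the currently selected zone:
unify_zone_normal(cmds.ls(selection=True, flatten=True))
```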

Additionally, I used a grayscale texture with flat values for the threshold to complement the vertex color in areas where the linear interpolation between vertices wouldn’t work, for example along hard edges on the legs. In these cases the use of a texture isn’t a problem, because each affected zone is separated in the UVs.

The same lighting, but with different vertex normals, and then with a threshold value to control the illumination.

UVs, Colors and Inlines

Once the system knows what to light or shade, defining colors is child’s play. I used two sets of textures with flat color areas where I placed the corresponding UVs. The first texture is the lit base color, while the second is meant to tint the base color (by multiplying the two) to get the shaded result.
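
The multiply is worth spelling out; with illustrative values of my own (not taken from the actual textures), a warm lit tone and a cool tint give a darker, cooler shadow color:

```python
# Illustrative values only: a warm lit base and a cool shade tint.
base = (0.87, 0.65, 0.52)   # lit base color
tint = (0.55, 0.45, 0.60)   # shade tint from the second texture
shaded = tuple(b * t for b, t in zip(base, tint))
# -> roughly (0.48, 0.29, 0.31): darker and cooler than the base
```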

The inlines, on the other hand, are a whole other story. As previously stated, we cannot hand-paint inlines; they would look blurry or pixelated, and yet they are an essential part of achieving a 2D look. In the final result, however, they stay clean and sharp regardless of how closely you look. The trick, as surprising as it may seem, is to use a texture! At its core a texture is just a bunch of sharp squares (pixels); if you only use straight horizontal or vertical lines there’s no pixelation to be seen, and if you don’t filter the texture in the engine, it’s not blurred out.

With that in mind, I had to find a way to unwrap the UVs of the whole character so that every edge on which I wanted an inline would be straight. Basically, a lot of squared-off UV patches. And because at this point I still had no idea how many inlines I wanted, I just split everything into its own patch.

To accomplish this, I used a plug-in for Maya called Nightshade UV Editor Pro. It’s free, it’s great, it has straightening features, and without it I might still be unwrapping this model to this very day.

With that task done, I then used Maya to bake a flat white color on a black background to generate a straight black line around all my UV patches. All that was left to do was to look at each edge where I wanted an inline and move its UVs into the black part of the texture; the farther in, the thicker the line. At first it wasn’t clear how many inlines I wanted, especially on the face, where too many would go against the flatter look of the lighting. I eventually settled on a few lines for the face and much thicker ones on the metallic parts to emphasize the hardness of the material.
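
The relation between UV offset and line width is easy to reason about: with an unfiltered texture, pushing an edge one texel into the black border exposes one texel of black. A back-of-the-envelope helper, where the texture size is an assumption on my part:

```python
def uv_inset_for_thickness(thickness_px, texture_size=2048):
    """UV-space offset that exposes a black band `thickness_px`
    texels wide along a straightened patch edge."""
    return thickness_px / float(texture_size)

uv_inset_for_thickness(4)   # ~0.00195 UV units for a thin face line
uv_inset_for_thickness(16)  # ~0.00781 for the thick metallic inlines
```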

Outlines

Time and time again with this project, something that sounded very simple at first turned into a real headache during implementation. Outlines are no different.

For the outlines, the idea was to use the tried-and-true inverse hull method: have the shader render a second version of the meshes, offset its vertices along their normals by a value stored in a vertex color channel to get variable outline thickness depending on the area (face and hair, mostly), and finally multiply the thickness by the camera distance to get a good-looking outline at all ranges.
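
The offset itself is simple. Here is a minimal Python sketch of the intended math; the base width and names are illustrative placeholders:

```python
import numpy as np

def outline_position(position, normal, thickness_channel,
                     camera_distance, base_width=0.01):
    """Push a hull vertex outward along its smoothed normal.

    thickness_channel: 0-1 value stored in a vertex color channel.
    camera_distance: scales the offset so the outline keeps a
    similar on-screen width at all ranges.
    """
    width = base_width * thickness_channel * camera_distance
    return np.asarray(position) + np.asarray(normal) * width
```

The hull is typically rendered with front faces culled, so only the expanded back faces remain visible around the silhouette.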

What could go wrong? Well, first, I had edited the normals and used hard edges, which split the vertices, so I could not offset them along their normals without getting splits and inconsistencies. Second, I couldn’t find a way to render a secondary mesh at the shader level.

After a few hours of scratching my head I just gave up, duplicated the whole mesh, smoothed all normals and used this as my outline mesh.

Grayscale visualisation of the blue channel controlling the width of the outline.
The darker it is, the thinner the outline.

Rigging and Skinning

For the rigging part I used an old script of mine which creates a whole biped rig with a single click. It has its flaws and needs to be updated, but for this project it was enough.

I also used blendshapes to finely tweak the final expression of the face, though here again a bone-based rig for the face would have been better suited to a real-time asset.

Final Export for Sketchfab

Even though this whole process was meant to be used with a custom shader in a game engine like Unity, I knew from the get-go it had to end up on Sketchfab. After all, what is the point of making something if you cannot share it with others? To me, Sketchfab is the almost perfect platform for that – “almost” because sadly it doesn’t support custom shaders, which means the long and tedious process I just described would somewhat be for naught.

The alternative I worked on was to create a secondary, more classic UV set. Starting from the base of the first one to save some time, I did an automatic unwrap and layout. I then baked the whole shading result, minus the inlines, into a giant 8K texture map following this secondary UV set. I cleaned it up a bit in Photoshop, especially along the seams.

In Sketchfab, I set the render to classic and shaderless, with the main 8K texture as diffuse on UV1 and the inline texture as a lightmap on UV0. Importantly, both textures’ filtering is set to “nearest”, which means no filtering, in order to keep the lines sharp. You can inspect the lightmap channel of the model on Sketchfab to display only the lines. I kept the original diffuse texture for the eyes, which had much better definition, and I also used a small additional emissive map for the green glowing effect.

The outline is a single-sided duplicated mesh on which I manually offset the vertices. Again, because I used a vertex color channel to control the offset distance, I couldn’t really do it by hand, so I wrote a new script to do it.
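
Here is a rough reconstruction of what such a script does; the channel, base width, and mesh name are my assumptions, not the actual script:

```python
import maya.cmds as cmds

def bake_outline_offset(mesh, base_width=0.05):
    """Physically push each vertex of the outline mesh outward,
    scaled by its blue vertex color channel."""
    for vtx in cmds.ls("{}.vtx[*]".format(mesh), flatten=True):
        blue = cmds.polyColorPerVertex(vtx, query=True, b=True)[0]
        normals = cmds.polyNormalPerVertex(vtx, query=True, xyz=True)
        # The outline mesh has fully smoothed normals, so every
        # vertex-face normal is identical; take the first one.
        nx, ny, nz = normals[0], normals[1], normals[2]
        offset = blue * base_width
        cmds.move(nx * offset, ny * offset, nz * offset, vtx, relative=True)

bake_outline_offset("chay_outline")  # made-up mesh name
```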

To round out the presentation, I wanted to make a small scene with some background elements. At first my mind was set on a very flashy action scene, with energy particles flowing around, shockwaves, and crackling ground. But the more I thought about it, the more it occurred to me that I wasn’t in the mood for bombastic action. I was tired; this project was done. At the end of his journey, Chay stood at the top of the hill, the wind blowing around him. He breathed a sigh of relief and thought:

“Here, at last.”

Artstation / Google+ / Website

 

About the author

Yann Lacour

3D Artist

