Take a good look at a picture you really like of your favourite 3D videogame character. Look at it very closely. Chances are, the reason you like that image so much is its meticulous attention to detail.
I went through a similar exercise by comparing pictures of popular 3D characters with my own, trying to identify something, anything, that I could do to improve my own 3D models. It was a controversial approach, especially since pretty much everyone suggested, at the time, that I stop wasting my time and purchase a professional 3D model from an asset store. Finally, after a few weeks of research (who doesn’t love to surf the Internet for hours), I identified three “opportunities for improvement”.
The obvious one was the low polygon count compared with the industry standard. To elaborate, my character models have fewer than 3,000 polygons, which is a rather low count, even for the previous generation of game consoles. Here is where my lack of artistic skill has a considerable impact: in information-theory terms, the extra polygons would carry no new information. In layman’s terms, unless I learn how to digitally “sculpt” better (or hire a real artist to do it for me), it doesn’t matter how many polygons I add to my models; the overall “shape” will remain the same.
The second opportunity for improvement was the unsound color palette I use, which is a topic worthy of an article of its own.
However, the most interesting challenge was, by far, hair modeling. Every single image of a 3D character I saw had a careful design, with attention to detail all the way down to the individual hair strand. Some of the pictures were so impressive that it was rather daunting to think about implementing something that even pretended to get one step closer to that quality level.
I was about to discard the whole idea when I thought that perhaps it would be fun to at least try. I mean, I don’t have a release date anyway, so I might as well explore the challenge and see how far I was willing to go before calling it quits.
It took me weeks to visualize it. The design change process was so convoluted that, at one point, I didn’t even know what I wanted. For now, it is best to show the old and the new 3D models side by side (zoom in included) and then explain how I got there:
- Left side: My original design tried to imitate professional 2D animation studios. That approach simplified hair into a solid “blob”, defined only by a color gradient and sporadic hair locks, if any. Back in the day, that was a popular approach, since the available technology could only process so much data before stuttering at a 60-frames-per-second drawing speed.
- Right side: The new design (which is what I’m going to describe throughout the rest of this article) implements hair strands, as well as a detailed “texture” for highlight and lowlight contrast (sort of, anyway). It doesn’t really match the quality of professional tools, but it is still a step in the right direction.
Hair modeling is new to me, for I am not a graphic designer. So, of course, the first step I took was to search the Internet for answers, particularly on YouTube.
This did not go well. On one hand, people would recommend modeling a single lock of hair as if it were a cylinder, then cloning it multiple times and altering each copy in a unique way, ending up with a messy, although quite artistic, mesh that still looked like a static blob without the right texture. On the other hand, people would recommend implementing a particle system and shaping it to the volume of the hairdo. This last approach looked quite promising but, in the end, I had to discard it.
This is why neither of these techniques worked: when it comes to videogames, it is rather important to keep the drawing speed at 60 frames per second. That does not give us a lot of microseconds to work with. If we are creating a fighting game where only two characters are ever present on the screen, then yes, we can go overboard and raise the polygon count through the roof, or implement particle systems that react to the surrounding environment, or even both. However, if we are creating a game in which there can be more than a dozen characters on the screen at any given time, then we need to be picky and discard those costly features that could drop our drawing speed below the 60 fps mark.
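To put that 60 fps constraint in numbers, here is a back-of-the-envelope sketch in Python. The per-character drawing cost is a made-up illustrative figure, not a measurement from any real engine:

```python
# Back-of-the-envelope frame budget at 60 fps.
# The per-character cost below is a made-up illustrative number.

FRAME_BUDGET_MS = 1000.0 / 60.0  # ~16.67 ms available per frame


def time_left(characters, ms_per_character):
    """Milliseconds left for everything else after drawing the cast."""
    return FRAME_BUDGET_MS - characters * ms_per_character


# A fighting game with 2 lavish characters leaves plenty of headroom:
print(round(time_left(2, 5.0), 2))   # 6.67
# A brawler with 14 characters at the same cost blows the budget:
print(round(time_left(14, 5.0), 2))  # -53.33, i.e. the frame rate drops
```

The same arithmetic is why a feature that is affordable for two characters can be ruinous for a dozen: the cost scales with the cast, while the budget stays fixed.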
The Drawing Board
After a failed research phase, I decided to implement my very own approach. To do so, I established a few assumptions to guide me:
- Ultra high definition (4K resolution) is definitely out of scope. I would need a complete game overhaul, and would have to hire a professional graphic artist to redo absolutely all the textures, to even consider aiming for this level of detail.
- There won’t be any crucial close-ups to any character. After all, it’s a fighting game, not an RPG.
- 3D models will focus on overall “shape” whereas textures will focus on “details”.
- I already have an in-house algorithm that deals with hair movement. The starting assumption is that this same algorithm should apply without major changes.
- There can be more than a dozen characters on the screen at any given time, thus optimization is paramount.
After writing all these guidelines on the drawing board, I went ahead and chose my development strategy: I decided to implement multiple materials using transparency on the outer layers.
I know: the experienced developer may have concluded at this point that, of all the available options, I had chosen the worst one (again). Still, before you discard all this as a “pipe dream” and close the page, allow me to earn the benefit of the doubt by showing what the end result looks like:
The Nature of Hair
Before modifying my models, I focused on creating the right texture. After all, according to my design guidelines, this is where I decided to focus all efforts for overall image details.
Here is where I stumbled most often: I made several texture files using commercial drawing tools, and all of them looked kind of weird when loaded into the character’s model. As I got closer to my goal with every attempt, I realized four key factors regarding the nature of hair:
- Hair is built from strands. Two adjacent strands have the same tone, but different brightness. Otherwise, I’d end up with the same “blob” I had before.
- Hair is not built from parallel strands. While most of them flow in one direction, a good number of them grow at an angle. These are almost imperceptible and yet, if they are not present, the character’s hair does not look natural. In fact, it looks rather weird instead.
- Hair strands are grouped into tresses (or locks). Further down from the scalp, the observer should be able to “see through” the gaps between tresses. Very important: the observer should NOT be able to see through tresses at head level. That’s even weirder.
- Oddly, hair tresses don’t move together. Overall they do, but a slight difference in movement must occur between them. Otherwise, the whole implementation looks like a cheap parlor trick.
Under the Hood
Those developers who have worked with transparent textures before are deeply aware of one undeniable fact: they are painful (to put it mildly). It’s an excruciating nightmare to sort draw calls to ensure that everything behind the transparent layer is drawn BEFORE the actual texture, because doing otherwise results in empty patches. To illustrate the point: following the idea of applying transparent pixels in between tresses, the resulting image was a floating head with some detached legs moving on the floor and nothing in between.
This is where knowledge about what goes on “under the hood” comes in handy: if the hair is made of a different material than the rest of the model, then the hair will be drawn AFTER the rest of the model, in a separate draw call. This perfectly addressed the transparency order problem. I was so happy with the result that I added two more materials just for hair, pretty much clones of the original, each one slightly bigger than the inner one, giving the overall effect of “volume”.
For the experts in the topic, this approach clashes with what we call “good practices” on so many levels:
- For starters, everybody goes to great lengths to do exactly the opposite, which is drawing as much as we can in as few draw calls per frame as possible (the main concept behind “batch drawing”). And here I am, drawing the same model in multiple draw calls.
- Also, everybody tries to pack as much as they can into a single texture, using the concept of a “texture atlas”. That sends a single texture buffer to the GPU, effectively reducing the number of draw calls per frame. And here I am, with multiple materials for the same model (typically, one texture per material). This is worse than it sounds: a texture file for hair is huge, especially if we want to deliver a detailed image at the strand level. Even worse, I had three hair textures per model – I could not use the same hair texture for all three layers, because all the transparent patches would sit on top of each other, making the multilayer approach completely pointless.
In perspective, it does look like a flawed design. However, to address these performance concerns, instead of using an actual texture file, I created a custom pixel shader that replicates the hair strand pattern. In my experience, texture sampling is a rather expensive operation for a pixel shader. I figured that, instead of sampling a texture, I could reduce the performance cost by simplifying the algorithm down to a switch case (yet another big no-no, according to some experts). I’d still have multiple draw calls, not to mention a pixel shader with internal branches, but at least I wouldn’t have to worry about sending AND sampling multiple big texture buffers in between draw calls (nor about the “MIP mapping” needed to get close-ups and far views right).
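As a rough illustration of that switch-case idea, here is a Python stand-in for the pixel-shader logic (the real thing would be shader code, and the strand width, tone table and tress size here are made-up values, not the ones from my game):

```python
# Python stand-in for the pixel-shader idea: derive a hair-strand shade
# procedurally from texture coordinates instead of sampling a texture.
# All constants are illustrative guesses.

STRANDS_PER_TRESS = 8  # hypothetical tress size


def strand_shade(u, v):
    """Return (brightness, alpha) for a texture coordinate (u, v) in [0, 1).

    Strands are thin vertical bands; a cheap branch table (the "switch
    case") picks the brightness so neighbouring strands differ slightly.
    """
    strand = int(u * 256)          # which strand this pixel falls in
    slot = strand % 4              # small repeating pattern of tones
    if slot == 0:
        brightness = 1.00          # highlight strand
    elif slot == 1:
        brightness = 0.85
    elif slot == 2:
        brightness = 0.70          # lowlight strand
    else:
        brightness = 0.90

    # Gaps between tresses are transparent only below head level (v > 0.5),
    # so the observer can see through the tips but never through the scalp.
    in_gap = (strand % STRANDS_PER_TRESS) == 0
    alpha = 0.0 if (in_gap and v > 0.5) else 1.0
    return brightness, alpha


print(strand_shade(0.0, 0.9))   # gap strand near the tips -> transparent
print(strand_shade(0.01, 0.2))  # neighbouring column at scalp level -> opaque
```

Note how the branch table replaces a texture fetch entirely: no texture buffer to upload, no sampling, and no MIP chain, at the cost of a branchy shader.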
It took me a while to get the pixel shader algorithm right. Still, after all this work, I was not quite happy with the final result. I mean, the character’s hair had the right look, but it still moved like a solid blob. No one would like it, even if we argued that it was a case of too much hairspray.
To address this issue, I looked at fine-tuning my hair animation algorithm. Here is where things got even weirder: a polygon model can have up to 59 bones for animation purposes, and the hair in my 3D models already uses 8 of those bones. When it comes to animation, suffice it to say that I’m already out of bones… and computer cycles.
However, considering that the hair mesh was built from three different layers with three different materials, I figured I could fiddle with the vertex shader so that the outer layers would be more affected by the bone animation than the inner layers. That way, the outer layers’ animation would be more “dramatic” than the inner layers’.
This can be done by messing with what we call “skin weights”. A model’s skin weight is a parameter that tells the animation engine how much a group of vertices is affected by the movement of a given bone. I applied this concept to the three-layer, three-material hair design by adjusting the model’s skin weights so that the outer layer is more affected by the bone’s movement than the inner layers.
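Here is a minimal sketch of that skin-weight trick, reduced to one dimension and plain Python for clarity. The bone offset, base weight and per-layer factors are made-up values, not the ones in my models:

```python
# Sketch of biasing skin weights per hair layer, using linear blend
# skinning reduced to 1D. All numbers are illustrative.


def skin(position, bone_offset, hair_weight, layer_factor):
    """Blend a vertex between the head bone (static here) and a hair bone.

    hair_weight is the authored skin weight for the hair bone;
    layer_factor scales it: > 1 for outer layers, < 1 for inner layers.
    """
    w = min(1.0, hair_weight * layer_factor)
    # (1 - w) of the vertex stays with the head (no offset);
    # w of it follows the hair bone's displacement.
    return position + w * bone_offset


# Same vertex position, same bone movement, different layer factors:
inner = skin(1.0, bone_offset=0.4, hair_weight=0.5, layer_factor=0.8)
outer = skin(1.0, bone_offset=0.4, hair_weight=0.5, layer_factor=1.4)
print(round(inner, 2), round(outer, 2))  # 1.16 1.28 -> outer moves further
```

The same authored animation drives both layers; only the effective weight differs, which is what makes the outer shell lag and sway a little more than the core.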
This tiny difference in movement, along with the transparency feature, gave the overall effect of “fluffiness”. It’s far from perfect, but it provided the technical framework for a much improved hair animation. Here is a story-board of a “dash” and “kick” animation to illustrate the effect:
After I implemented all the described features, I was happy with the overall look. If this were a third-person shooter, additional fine-tuning would be required (it’s just not fluffy enough, not yet anyway). However, for a side-scrolling brawler, or even a platformer, the results are pretty good. Not quite AAA quality (I’d need a miracle for that) but, for an “indie” game, it rather “fits the bill”: the colors have strong tones, there are some unexpected yet welcome highlights and lowlights, I can set the algorithm for a wide range of hair colors and hairdos, and the overall animation looks and feels way more natural than before. Every now and then, it would even feel like the game is made of artistic 2D sprites, if it weren’t for the 60 fps animations.
So far, I haven’t noticed a real “dent” in the game’s overall performance, mostly because the count of affected pixels is way too small compared with the big picture (pun intended). Also, the Xbox One is one powerful console. Still, in the future, when (and if) I add more game elements, I may be forced to review and choose what to keep and what to take out. However, as of this blog post, I’m very, very happy with the final result.
Hirurg. “Female long brown hair stick photo”. iStock by Getty Images. October 31, 2015. https://www.istockphoto.com/ca/photo/female-long-brown-hair-gm494993286-27574237