Dev Diary #9: Hair Modeling using Transparent Textures and Multiple Materials


Take a good look at a picture you really like of your favourite 3D videogame character. Look at it very closely. Chances are, the reason you like that image so much is its meticulous attention to detail.

I went through a similar exercise by comparing pictures of popular 3D characters with my own, trying to identify something, anything, that I could do to improve my own 3D models. It was a controversial approach, especially since pretty much everyone at the time suggested I stop wasting my time and purchase a professional 3D model from an asset store. Finally, after a few weeks of research (who doesn’t love to surf the Internet for hours), I identified three “opportunities for improvement”.

The obvious one was the low polygon count compared with the industry standard. To elaborate, my character models have fewer than 3,000 polygons, which is a rather low count, even for the previous generation of game consoles. Here is where my lack of artistic skill has a considerable impact. In layman’s terms: unless I learn how to digitally “sculpt” better (or hire a real artist to do it for me), it doesn’t matter how many polygons I add to my models; the overall “shape” will remain the same.

The second opportunity for improvement was the unsound color palette I use, which is a topic worthy of an article of its own.

However, the most interesting challenge was, by far, hair modeling. Every single image of a 3D character I saw had a careful design, paying attention to details all the way to the hair strand level. Some of the pictures were so impressive that it was rather daunting to think about trying to implement something that even pretended to get one step closer to that quality level.

I was about to discard the whole idea when I thought that perhaps it would be fun to at least try. I mean, I don’t have a release date anyway, so I might just as well explore the challenge and see how far I was willing to go before calling it quits.

The Challenge

It took me weeks to visualize it. The design change process was so convoluted that, at one point, I didn’t even know what I wanted. For now, it is best to show both the old and the new 3D models side by side (zoom in included) and then explain how I got there:

Original Vs New Models

  • Left side: My original design tried to imitate professional 2D animation studios. That approach simplified hair as a solid “blob”, defined only by a color gradient and sporadic hair locks, if any. Back in the day, that was a popular approach since the available technology could only process so much data before stuttering at a 60 frames per second drawing speed.
  • Right side: The new design (which is what I’m going to describe throughout the rest of this article) implements hair strands, as well as a detailed “texture” for highlight and lowlight contrast (sort of, anyway). It doesn’t really match the quality of professional tools, but it is still a step in the right direction.

The Research

Hair modeling is new to me, for I am not a graphic designer. So, of course, the first step I took was to search the Internet for answers, particularly on YouTube.

This did not go well. On one side, people would recommend modeling a single lock of hair as if it were a cylinder and then cloning it multiple times, altering each copy in a unique way until ending up with a messy, although quite artistic, mesh that still looked like a static blob without the right texture. On the other side, people would recommend implementing a particle system and shaping it to the volume of the hairdo. This last approach looked quite promising but, in the end, I had to discard it.

This is why neither of these techniques worked: when it comes to videogames, it is rather important to keep the drawing speed at 60 frames per second. That does not give us a lot of microseconds to work with. If we are creating a fighting game where only two characters are ever present on the screen, then yes, we can go overboard and raise the polygon count through the roof, implement particle systems that react to the surrounding environment, or even both. However, if we are creating a game in which there can be more than a dozen characters on the screen at any given time, then we need to be picky and discard those costly features that could drop our drawing speed below the 60 fps mark.
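To put a number on that budget, here is a back-of-the-envelope sketch (the constant and function names are mine, not from any engine): at 60 fps, everything a frame does has to fit inside roughly 16.7 milliseconds.

```cpp
// At 60 fps, the entire frame -- game logic, physics, and every single
// draw call -- must fit within 1000 ms / 60 frames, roughly 16.7 ms.
constexpr double TargetFps     = 60.0;
constexpr double FrameBudgetMs = 1000.0 / TargetFps;   // ~16.67 ms

// Returns true when a measured frame time still meets the 60 fps target.
bool MeetsFrameBudget(double frameTimeMs)
{
    return frameTimeMs <= FrameBudgetMs;
}
```

Every feature we add (more polygons, particles, extra draw calls) eats into that same budget, which is why costly hair techniques get cut first.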

The Drawing Board

After a failed research phase, I decided to implement my very own approach. To do so, I established a few assumptions to guide me:

  • Ultra high definition (4K resolution) is definitely out of scope. I would need a complete game overhaul and a professional graphic artist to re-do absolutely all textures to even consider aiming for this level of detail.
  • There won’t be any crucial close-ups to any character. After all, it’s a fighting game, not an RPG.
  • 3D models will focus on overall “shape” whereas textures will focus on “details”.
  • I already have an in-house algorithm that deals with hair movement. The starting assumption is that this same algorithm should apply without major changes.
  • There can be more than a dozen characters on the screen at any given time, thus optimization is paramount.

After writing all these guidelines on the drawing board, I went ahead and chose my development strategy: I decided to implement multiple materials using transparency on the outer layers.

I know, the experienced developer may have concluded at this point that, of all available options, I had chosen the worst one (again). Still, before you discard all this as a “pipe dream” and close the tab, allow me to earn the benefit of the doubt by showing what the end result looks like:

The Nature of Hair

Before modifying my models, I focused on creating the right texture. After all, according to my design guidelines, this is where I decided to focus all efforts for overall image details.

Here is where I stumbled the most: I made several texture files using commercial drawing tools, and all of them looked kind of weird when loaded onto the character’s model. As each attempt got me a little closer to my goal, I realized four key factors regarding the nature of hair:

  • Hair is built from strands. Two adjacent strands have the same tone, but different brightness. Otherwise, I’d end up with the same “blob” I had before.
  • Hair is not built from parallel strands. In fact, while most of them go in one direction, a good number of them grow at an angle. These are almost imperceptible and yet, if they are not present, the character’s hair does not look natural; it looks rather weird instead.
  • Hair strands are grouped into tresses (or locks). As we go further down, the observer should be able to “see through” the gaps between tresses. Very important: the observer should NOT be able to see through the tresses at head level. That’s even weirder.
  • Oddly, hair tresses don’t move in perfect unison. Overall they move together, but a slight difference in movement must occur between them. Otherwise, the whole implementation looks like a cheap parlor trick.
Reference image: female long brown hair (purchased stock photo; see Works Cited)


Under the Hood


Those developers who have worked with transparent textures before are deeply aware of one undeniable fact: it is painful (to put it mildly). It’s an excruciating nightmare to sort draw calls to ensure that everything behind the transparent layer is drawn BEFORE the actual texture, because doing otherwise results in empty patches. To illustrate the point: after applying transparent pixels in-between tresses without sorting, my first resulting image was a floating head with a pair of detached legs moving on the floor, and nothing else in between.

This is where knowledge about what goes on “under the hood” comes in handy: if the hair is made of a different material than the rest of the model, then the hair will be drawn AFTER the rest of the model, in a separate draw call. This perfectly addressed the transparency order problem. I was so happy with the result that I added two more materials just for the hair, pretty much clones of the original, each one slightly bigger than the one beneath it, giving the overall effect of “volume”.
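The resulting draw ordering can be sketched as follows (a minimal illustration; all names are mine, not from the actual engine): the opaque body material is submitted first, then the hair materials from the innermost layer to the outermost, so each transparent layer always blends against pixels that already exist.

```cpp
#include <string>
#include <vector>

// Sketch of the per-material draw ordering described above: opaque
// geometry first, then the transparent hair layers, inner to outer.
std::vector<std::string> BuildDrawOrder(int hairLayers)
{
    std::vector<std::string> calls;
    calls.push_back("body");                      // opaque material first
    for (int i = 1; i <= hairLayers; ++i)         // then hair, inner -> outer
        calls.push_back("hair_layer_" + std::to_string(i));
    return calls;
}
```

With three hair materials, this yields four draw calls per character: the body, then hair layers 1 through 3.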

For the experts on the topic, this approach clashes with what we call “good practices” on so many levels:

  • For starters, everybody goes to great lengths to do exactly the opposite: draw as much as we can in as few draw calls per frame as possible (which is the main concept behind “batch drawing”). Yet here I am, drawing the same model in multiple draw calls.
  • Also, everybody tries to pack as much as they can into a single texture, using the concept of a “texture atlas”. That sends a single texture buffer to the GPU, effectively reducing the number of draw calls per frame. Yet here I am, with multiple materials for the same model (typically, one texture per material). This is worse than it sounds: a texture file for hair is huge, especially if we want to deliver a detailed image at the strand level. Even worse, I would need three hair textures per model – I could not use the same hair texture for all three layers because all the transparent patches would sit on top of each other, making the multilayer approach completely pointless.

Seen in perspective, it does look like a flawed design. However, to address these performance concerns, instead of using an actual texture file, I created a custom pixel shader that replicates the hair strand pattern. In my experience, texture sampling is a rather expensive operation for a pixel shader. I figured that, instead of sampling a texture, I could reduce the performance cost if I simplified the algorithm down to a switch case (yet another big no-no according to some experts). I’d still have multiple draw calls, not to mention a pixel shader with internal branches, but at least I wouldn’t have to worry about sending AND sampling big, multiple texture buffers in-between draw calls (nor about the “MIP mapping” needed to get “close-ups” and “far views” right).
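The real pattern lives in an HLSL pixel shader; here is the same idea expressed as a plain C++ function so the logic is easier to follow. The band widths, brightness values and struct names below are made up for illustration, not taken from the actual shader: each pixel derives its brightness (and whether it falls in a transparent gap between tresses) purely from its position, with no texture sampling at all.

```cpp
// A procedural strand pattern reduced to a switch over a repeating
// band index. Brightness multiplies the base hair colour; transparent
// pixels mark the gaps between tresses. Values are illustrative only.
struct HairSample
{
    float brightness;   // multiplier applied to the base hair colour
    bool  transparent;  // true in the gaps between tresses
};

HairSample SampleHairPattern(int x)     // x: non-negative pixel column
{
    switch (x % 8)                      // the pattern repeats every 8 pixels
    {
    case 0:  return { 1.00f, false };   // highlight strand
    case 1:
    case 2:  return { 0.85f, false };   // regular strands
    case 3:  return { 0.70f, false };   // lowlight strand
    case 4:
    case 5:  return { 0.85f, false };   // regular strands
    case 6:  return { 0.75f, false };   // slightly darker strand
    default: return { 0.00f, true  };   // gap between tresses
    }
}
```

In the shader itself, the same switch runs per pixel, which is where the branching warnings mentioned below come from.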

It took me a while to get the pixel shader algorithm right. Still, after all this work, I was not quite happy with the final result. I mean, the character’s hair had the right look, but it still moved like a solid blob. No one would like it, even if we argued that it was a case of too much hairspray.

To address this issue, I looked at fine-tuning my hair animation algorithm. Here is where things got even weirder: a polygon model can have up to 59 bones for animation purposes, and the hair in my 3D models already uses 8 of those bones. When it comes to animation, suffice it to say that I’m already out of bones… and computer cycles.

However, considering that the hair mesh was built from three different layers with three different materials, I figured I could fiddle with the vertex shader so the outer layers would be more affected by the bone animation than the inner ones. That way, the outer layer’s animation would be more “dramatic” than the inner layers’.

This can be done by messing with what we call “skin weights”. A skin weight is a parameter that tells the animation engine how much a group of vertices is affected by the movement of a given bone. I applied this concept to the three-layer/material hair design by adjusting the model’s skin weights so the outer layer is more affected by a bone’s movement than the inner layers.
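The layer-dependent weighting can be sketched like this (a minimal illustration; the function name and the per-layer scale factor are assumptions of mine, not values from the actual engine): the further out the layer, the more its weight is amplified, while still staying in the valid [0, 1] range.

```cpp
#include <algorithm>

// Scales a vertex's skin weight depending on which hair layer it
// belongs to (0 = innermost). Outer layers react more strongly to the
// hair bones, so their movement is more "dramatic" and the tresses
// drift slightly apart while animating. The 0.15 factor is made up.
float LayerSkinWeight(float baseWeight, int layer)
{
    const float scalePerLayer = 0.15f;              // assumed tuning value
    float scaled = baseWeight * (1.0f + scalePerLayer * layer);
    return std::min(scaled, 1.0f);                  // weights stay in [0, 1]
}
```

Applying this in the vertex shader (or when baking the mesh) keeps the layers in sync overall while giving each one a slightly different amount of swing.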

This tiny difference in movement, along with the transparency feature, gave the overall effect of “fluffiness”. It’s far from perfect, but it provided the technical framework for much improved hair animation. Here is a storyboard of a “dash” and a “kick” animation to illustrate the effect:


After I implemented all the described features, I was happy with the overall look. If this were a third-person shooter, additional fine-tuning would be required (it’s just not fluffy enough, not yet anyway). However, for a side-scrolling brawler, or even a platformer, the results are pretty good. Not quite AAA quality (I’d need a miracle for that) but, for an “indie” game, it rather “fits the bill”: the colors have strong tones, there are some unexpected yet welcome highlights and lowlights, I can set the algorithm for a wide range of hair colors and hairdos, and the overall animation looks and feels way more natural than before. Every now and then it even feels as if the game were made of artistic 2D sprites, were it not for the 60 fps animations.

So far, I haven’t noticed a real “dent” in the game’s overall performance, mostly because the count of affected pixels is way too small compared with the big picture (pun intended). Also, the Xbox One is one powerful console. Still, in the future, when (and if) I add more game elements, I may be forced to review and choose what to keep and what to take out. However, as of this blog entry, I’m very, very happy with the final result.

Works Cited

Purchased image:
Hirurg. “Female long brown hair stock photo”. iStock by Getty Images. October 31, 2015.

Dev Diary Entry #8: Submission!

Time is up and the day has come to submit our entry. It is so hard to believe that 6 months have passed!

I have to confess that my game “Antimatter Instance” is nowhere near finished. I mean, I was able to complete a functional prototype – a “proof of concept”, if I may call it that. I submitted that version and I’m confident it will comply with the minimum requirements of the Dream Build Play 2017 challenge. Still, I cannot help but wish I’d had more time to work on it.

It is a little bit of a downer to admit that the game is incomplete, having stated at the beginning of the contest that I had reduced the scope of the project in order to ensure I would finish it on time. But that’s ok: the contest may have ended, but I’m going to continue working on it anyway. A videogame is, after all, like an artistic masterpiece and, as such, it is done when it is done. Simply put, I’m very motivated to keep working on it and I really want to release it to the general public, now that I have access to the Creators Program.

There is no point in reflecting on what worked and what didn’t, because I haven’t finished yet. Although, I have to point out that the contest timing was rather unforgiving: 6 months is just not enough to create a game good enough to be released on an international on-line store (the last thing I want is to be famous for releasing half-baked games). On top of that, the due date collided with the Christmas and New Year’s celebrations, which caused some sour conflicts with family members. On the other hand, I kind of understand, from a business perspective, the rush to publish something during the first major holiday season after the release of the Creators Program.

Still, I’m happy and proud of what I was able to deliver. I wanted to show what a really small indie team could create using Microsoft’s DirectX technology. Even though the Development Diary contest is also over, I’m going to continue posting blogs on this site. Who knows, maybe my rambling could encourage other indies who, like me, are not afraid of getting their hands dirty and creating their own game engine.

Dev Diary Entry #7: Xbox Live Authentication for UWP Projects Using C++


Last Friday, I submitted my entry to the Windows Store and I’m currently biting my nails, pulling my hair (what is left of it anyway) and kicking the wall where the kitchen clock is, waiting for my game to be certified and on its way to the publishing stage. The truth is that I had already submitted it last Wednesday, and it got certified in less than 16 hours. However, I found a rather odd bug in the Xbox Live authentication process, so I decided to cancel the publication, implement a hot fix for the problem and re-submit. The stress of this gamble is such that, even though I’ve been sleep-deprived for the last two months, I haven’t been able to rest since.

In retrospect, the Xbox Live integration was a rather complex task to complete (the heaviest sentence of this entire blog, considering it’s coming from a guy who implemented a home-made game engine to host his very own 3D animation algorithm). I mean, there is plenty of documentation, and the samples in the GitHub repository are very good. However, it took me a while to match the documentation to the code and finally come up with an integration approach.

To elaborate, a requirement for publishing a game for the Xbox One through the Creators Program is to integrate Xbox Live sign-in and display the user identity (gamertag), so players can validate that they are using the right profile – see point #4 in the following on-line guide:

Now, the algorithm to do so is described, in detail, in the following document:

The big problem with this last article is that the code shown has been simplified to the point that it doesn’t really match the information described in the main paragraphs. I mean, the article is really, really good, but the source code shown is quite incomplete. So, I decided to document the source code I used to integrate with the Xbox Live framework in order to complement this MSDN article and help fellow indie developers enlisted in the Creators Program.

Strategy Overview

To be clear, the objective is to implement Xbox Live Authentication for a UWP game using C++. Whatever happens next (leaderboards, achievements, multiplayer, etc) is outside the scope of the source code I’m documenting. I already have one game at the store using this code and it has been out for almost two weeks now so it is safe to assume that it has passed all certification tests performed by the Windows Store personnel.

Then again, I’m an awful programmer, so please take the source code I’m documenting as a reference only. Think of it as provided “as is”.

So, basically, I’m going to follow the document on the MSDN network. I think that will be the most helpful approach for other indies. Likewise, we are going to assume that the NuGet package Microsoft.Xbox.Live.SDK.Cpp.UWP has already been downloaded and installed.

1. Creating an XboxLiveUser object.

To fulfill the requirement of logging into Xbox Live, we first need to create a reference to the Xbox Live API. To do so, we add the following include to the header file of our main game class:

#include <xsapi/services.h>

Now, on this header file, we also need to define the main variables to use for the Xbox Live integration:

// ***** Xbox Live Variables... ***** //
    int                                                     lngXBLUserLoginStatus;
    std::shared_ptr<xbox::services::system::xbox_live_user> oMyUser;
    std::wstring                                            strLoginErrors;

Basically, lngXBLUserLoginStatus is a little flag we use so we can log in on a background thread rather than make the player wait for the login process to complete. oMyUser is the Xbox Live user object, and strLoginErrors is a string we use to keep track of errors.

Now, within the constructor of our main game class, we initialize these variables as follows:

// ***** Xbox Live Variables... ***** //
    lngXBLUserLoginStatus = 0;
    oMyUser = nullptr;
    strLoginErrors  = L"Starting... ";

2. Sign-in silently to Xbox Live at Startup

As the MSDN guide suggests, “your game should start to authenticate the user to Xbox Live as early as possible after launching, even before you present the user interface” (MSDN, 2017).

In my case, my games have two content loading routines: the first loads the core of my game engine so 2D elements can be rendered; the second loads sounds, 3D models and data structures. This fulfills a Windows Store requirement that the game be responsive within seconds, so the player doesn’t assume the game is “hung” at launch. This is important to mention because the Xbox Live login usually takes quite a few seconds.

// ***** Signing to Xbox Live... ***** //
    try {
        oMyUser = std::make_shared<xbox::services::system::xbox_live_user>();
        if (oMyUser != nullptr) {
            auto oMyAsyncOp = oMyUser->signin_silently(nullptr)
                .then([this](xbox::services::xbox_live_result<xbox::services::system::sign_in_result> oMyResult) {
                if (!oMyResult.err()) {
                    switch (oMyResult.payload().status()) {
                        case xbox::services::system::sign_in_status::success:
                            lngXBLUserLoginStatus = 1;
                            strLoginErrors = L"Success";
                            break;
                        case xbox::services::system::sign_in_status::user_cancel:
                            lngXBLUserLoginStatus = -1;
                            strLoginErrors = L"don't know";
                            break;
                        default:    // e.g. user_interaction_required
                            lngXBLUserLoginStatus = -1;
                            break;
                    }
                } else {
                    lngXBLUserLoginStatus = -1;
                    wchar_t chrValue[200];
                    _itow_s(oMyResult.err().value(), chrValue, 200, 16);
                    strLoginErrors = std::wstring(chrValue);
                }
            });
        }
    } catch (const std::exception&) {
        strLoginErrors = L"Exception during silent sign-in";
    }

To document the source code:

  • First, it creates the Xbox Live user.
  • Then, it creates an asynchronous operation for the sign-in. Granted, this asynchronous whachamacallit looks and sounds complex. However, if you have already created a game using C++, you may recognize that this is exactly what we all do when loading textures, shaders, 3D models, sounds and other data files – otherwise the game would take forever to start.
  • This asynchronous operation focuses on the “signin_silently” method of our user, using “nullptr” as the input parameter to specify that we want the currently logged-in user.

The funny thing is that what lives within this asynchronous task is exactly what is described in the MSDN document. Following the guide, the sign-in result is analyzed as follows:

  • If there are no errors and the status is “success” then we can assume that our oMyUser variable has good information. We indicate this by populating our flag lngXBLUserLoginStatus as “1”.
  • On the other hand, as the MSDN document describes, if the result is “UserInteractionRequired” (covered by the “default” clause) then we need to ask the user to log in for us. This status is indicated by a “-1” value for our lngXBLUserLoginStatus flag.

3. Attempt to Sign-in with UX if required.

If the silent sign-in failed, we are required to authenticate the user to Xbox Live with the default user interface included in the Xbox Live API. I do this in the “Update” routine of the game menu, as follows:

// ***** Signing to Xbox Live... ***** //
    try {
        if (lngXBLUserLoginStatus == -1) {
            lngXBLUserLoginStatus = -2;   // avoids launching the task twice
            auto oMyAsyncOp = oMyUser->signin(nullptr)
                .then([this](xbox::services::xbox_live_result<xbox::services::system::sign_in_result> oMyResult) {
                if (!oMyResult.err()) {
                    switch (oMyResult.payload().status()) {
                        case xbox::services::system::sign_in_status::success:
                            lngXBLUserLoginStatus = 1;
                            break;
                        case xbox::services::system::sign_in_status::user_cancel:
                            break;        // stays at -2: no gamertag shown
                        default:
                            break;
                    }
                } else {
                    lngXBLUserLoginStatus = -3;
                    wchar_t chrValue[200];
                    _itow_s(oMyResult.err().value(), chrValue, 200, 16);
                    strLoginErrors = std::wstring(chrValue);
                }
            });
        }
    } catch (const std::exception&) {
        strLoginErrors = L"Exception during sign-in";
    }

In layman’s terms:

  • Since this little snippet lives in the “Update” routine of the game’s main menu, it monitors the value of our lngXBLUserLoginStatus flag while the menu is processed.
  • If the silent sign-in failed (hence the -1 value), then it creates an asynchronous task to launch the “signin” method of our oMyUser in the background. A value of “-2” will avoid launching this task multiple times.
  • If there are no errors and the status is “success” then we can assume that our oMyUser variable has good information. Likewise, we indicate this by populating our flag lngXBLUserLoginStatus as “1”.
  • On the other hand, if the user cancelled (status of -2) or there were errors that prevented the sign-in process (status of -3), then we are excused from displaying the player’s gamertag. This may put a wrinkle in things if we are using some Xbox Live services, and we may need additional code to work around this issue… but that’s another blog to be published at a different date.
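For reference, the flag values used throughout this article can be collected as named constants (the constant names are mine; the code itself just uses the raw numbers):

```cpp
// Named values for lngXBLUserLoginStatus, as used in this article.
namespace XblLoginStatus
{
    constexpr int Starting     =  0;  // default, set in the constructor
    constexpr int SignedIn     =  1;  // silent or UX sign-in succeeded
    constexpr int NeedsUx      = -1;  // silent sign-in failed, UX required
    constexpr int UxInProgress = -2;  // UX sign-in launched (or cancelled)
    constexpr int SignInError  = -3;  // UX sign-in failed with an error
}

// The gamertag should only be drawn in the signed-in state.
bool CanShowGamertag(int status)
{
    return status == XblLoginStatus::SignedIn;
}
```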

4. Showing the Player’s Gamertag

This is not described in the original article, but there is no harm in showing the source code for how I am doing it:

void clsGame::DrawGamerTag()
{
    try {
        if ((lngXBLUserLoginStatus == 1) &&
            (oMyUser != nullptr)) {
            std::wstring strUserGreeting = L"Hello, ";
            strUserGreeting += oMyUser->gamertag();

// ***** Drawing the gamer tag... ***** //
            // (the actual text rendering call goes here)
        }
    } catch (const std::exception&) {
        strLoginErrors = L"Exception while drawing the gamertag";
    }
}

In short, if the user has valid data and the sign in operation has completed successfully, then I get the Player’s Gamertag and I display it on the screen.


There is one important note to mention: The authentication process does not ensure that the device is on-line. In fact, all my games have the “Internet Client” capability disabled, so all these tasks are done by the operating system on the game’s behalf.


The source code I just described closely matches the information documented in the referred MSDN article. It may seem a little crude, especially since I’m using a plain variable as a flag instead of a proper semaphore for data marshaling. However, since we are talking about a single integer, it is a rather controlled scenario. I really hope this code can help other members of the Creators Program. It’s not the optimal approach for a great on-line experience, but if your game focuses on local play, it should be good enough, at least as a starting point.
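If the single-flag approach feels too loose, the same idea can be expressed with std::atomic<int>, which makes the cross-thread reads and writes explicit without the weight of a full semaphore. This is a sketch of the pattern under my assumptions, not the code from my game:

```cpp
#include <atomic>

// The same status flag as an atomic: the background sign-in task
// writes it, the Update/Draw routines read it every frame, and the
// atomic guarantees the int is never observed half-written.
std::atomic<int> g_xblLoginStatus{ 0 };

// Called from the background task when the async operation finishes.
void OnSignInCompleted(bool success)
{
    g_xblLoginStatus.store(success ? 1 : -1);
}

// Called every frame from the game loop.
bool IsSignedIn()
{
    return g_xblLoginStatus.load() == 1;
}
```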

Dev Diary Entry #6: Emergency Upgrade


I’ve been interacting with many fellow indie developers through on-line media regarding games running as UWP apps on the Xbox One. You see, I just bought my Xbox One a month ago (Black Friday in Canada is in October) and I wanted to “catch up” on game development for this console. The most outstanding issue I noticed in the posts I read was a concern about performance.

As developers, we all hate surprises during the last phase of a project. Aiming to defuse the pressure of the final weeks, I went ahead and started doing performance tests. What I found was rather upsetting: the performance of my game engine as a UWP app on the Xbox One was awful – it could only run fluently at 30 fps. For a state-of-the-art device, that was a rather alarming finding.

After a thorough analysis (and a lot of cursing), I found that the sources of my problem were the console’s GPU restrictions and the outdated development environment I was using.

Part of my stubbornness about keeping the tools I’m familiar with is due to the fact that I’m one of the few indie developers that use (and praise) DirectX 11. I know of other cases, but all of them are very, very, very quiet about what they do and how they do it. As a result, when I have a critical problem, I’m pretty much on my own.

However, when I chatted with fellow developers about my findings, Shahed Chowdhuri, a friend and technical evangelist, pointed out that, although old versions of the Windows 10 SDK restricted UWP apps to half of the GPU, the latest SDK, released with the “Fall Creators Update”, granted full access to the GPU. As plain as it sounds, this was the solution to my performance problem. However, since the latest SDK runs only on the latest version of Windows 10 and can only be used with the latest version of Visual Studio, this meant I had to update absolutely everything, from my console and PC to the core of my home-made game engine.

Here is the article that was pointed out:

Upgrading the Development Environment

I was lucky enough to be able to set up a second development environment with Windows 10. The roadblock here was that the Windows 10 Fall Creators Update is deployed by region, and my system hadn’t been selected yet. However, I was able to get the upgrade from the following link:

I downloaded a fresh copy of Visual Studio 2017, which included the latest SDK. Here is the link I used:

One day, without warning, my Xbox One S console updated itself to the October 2017 version. That was an easy upgrade.

Upgrading my Game to Visual Studio 2017

The next step was to upgrade my games to the new development environment. Here is where the procedure got painful: although I was able to upgrade my game from Visual Studio 2015 to Visual Studio 2017, I was unable to re-target the target and minimum Windows 10 SDK versions to the Fall Creators SDK. Simply put, the option for that version was not available in the drop-down list.

I was kind of expecting a surprise like that. No matter, though, for I had a backup plan: since all my code was encapsulated XNA-style, I created a brand-new project targeting the latest Windows 10 SDK and copied my code from the old project to the new one. That was what I did when I migrated from Visual Studio 2013 to 2015, so I was very familiar with the process.

I was happy to see that all the changes and enhancements in the new Visual Studio 2017’s “DirectX 11 App (Universal Windows)” Application Template were encapsulated and properly implemented. For instance, all the DirectX managers and handler classes are new and improved, but encapsulated in the DeviceResources wrapper class (see my previous post about the inside of a DirectX application). This made the upgrade so easy to implement.

Once this process was finished, I ran my tests one more time and noticed a considerable improvement on the Xbox One console: I was now able to run my games at the desired frame rate of 60 fps, although they still stuttered frequently. The improvement was good, but not good enough to fulfill the expectations of gamers around the world.

Shader Compilation

After running the Graphics Debugger, I identified the bottleneck of my game engine: a particular pixel shader that draws the background elements of the game in batch (part of the DrawIndexedInstanced implementation). For example, this is the shader in charge of drawing the buildings in my third-person shooter. I re-wrote it to be even more efficient, but I still couldn’t reach the desired performance. I was looking at some branching warnings when I noticed something in the Visual Studio environment: by default, projects in “Debug” mode compile shaders with all optimization options disabled and extra debug functions added. Conversely, projects in “Release” mode compile shaders with optimization enabled and all debug features excluded. In other words, projects need to be compiled in “Release” mode prior to deploying them to the console for good performance.
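The Debug/Release difference boils down to the flags handed to the HLSL compiler. Here is a minimal sketch of that selection; the D3DCOMPILE_* values below match those in d3dcompiler.h, and are redefined (behind a guard) only so the snippet stands alone outside the Windows SDK:

```cpp
// Flag values as defined in d3dcompiler.h; redefined here (guarded)
// only so this sketch is self-contained outside the Windows SDK.
#ifndef D3DCOMPILE_DEBUG
#define D3DCOMPILE_DEBUG               (1 << 0)
#define D3DCOMPILE_SKIP_OPTIMIZATION   (1 << 2)
#define D3DCOMPILE_OPTIMIZATION_LEVEL3 (1 << 15)
#endif

unsigned int ShaderCompileFlags()
{
#if defined(_DEBUG)
    // "Debug" configuration: optimizations off, debug info embedded.
    return D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION;
#else
    // "Release" configuration: fully optimized, no debug payload.
    return D3DCOMPILE_OPTIMIZATION_LEVEL3;
#endif
}
```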

I was about to test this theory when my Xbox One suddenly upgraded itself to the Windows 10 November 2017 version. These upgrades are not exactly optional if we want to stay on-line, so I went ahead and upgraded my console.

Once I got all these changes done, I can say that my home-made game engine now flies on the Xbox One S: it runs smoothly at 60 fps, it barely stutters and, when it does, it is because of my code, not the GPU. I can even leave the video capture features on and record without jeopardizing performance.


For the desired 60 fps game experience, we only have about 16 milliseconds to draw a frame. Having only half of the GPU for that was a huge challenge for amateur developers like me. However, the Fall Creators Update granted us full access to the GPU, and that is a huge improvement. Still, to be fair and objective, I have to be accountable for the lame HLSL code I wrote, along with the compilation settings of the deployed executable.

A performance drop is a very complex issue and takes a lot of manpower to solve. Most of the time, you don’t know you have a performance problem until you see your game stutter. I cannot say which upgrade or optimization (or combination thereof) fixed my game engine, but I’m really happy that something did. Still, this dev log entry documents the steps I took, in the hope that it will help other indies with their projects.

Dev Diary Entry #5: Afterthoughts about Project Strategy


It’s mid-November, we are heading into the final weeks before the due date and, despite my greatest efforts, I am once again behind schedule. Things are moving forward, though, and, as proof, I’ve created a work-in-progress video, captured straight from my brand new Xbox One S.

The small detail that amazes me about Microsoft’s UWP technology is that, in this particular case, the very same executable works for both PCs and Xbox consoles – the Xbox controller can be used in both cases, although gamers will most likely use the keyboard on PC. For my little tablet, though, I’m afraid I still need to recompile for a 32-bit processor, but that will do the trick for touch-screen control.

Now that I have shown graphic proof of how the game is going to look, I can finally talk about the technology and tools used without raising serious credibility doubts. However, I don’t want to write a sad blog entry overly justifying the use of tools that have become so unpopular in this day and age, using arguments that nobody is going to believe anyway – that won’t add any value for my readers, and I already have enough hate mail to deal with. Instead, let me address that same topic from a different point of view by talking about Project Strategy, using “Antimatter Instance” as an example.

What I mean by “Project Strategy” is the list of steps taken to identify the development path to follow in order to deliver a given project. Following a structured path ensures that all efforts are made congruently, efficiently and towards a common objective, scheduled within the given time frame. Sure, we can always “wing it”, but that usually leads to an unproductive and expensive learning experience rather than a successful implementation – no doubt it is fun, but most of the time it is fruitless.

Note that these steps are based on my personal experience as an “indie” rather than on serious project-management research.

Step 1: Identify what you want to accomplish

Every single project I’ve worked on, as an indie and as an IT professional, starts by identifying what exactly we want to accomplish. A work-related project focuses on fulfilling a need and making the lives of our users as easy and simple as possible. A videogame, however, focuses on the “dream game” that we have always wanted to create.

No doubt this step is the best phase of project planning, as we usually have long, frequent brainstorms about the videogame we want to create. We get inspired by other games, we get ideas of our own, we read on the Internet what can be accomplished in this day and age, we put all of this together and define what we want to accomplish. The more we define it, the more we want to put everything aside and start working on it. It is this vision (along with an unlimited supply of caffeine) that will fuel all the long nights ahead.

In my case, I always wanted to create a 3D fighter / action game along the lines of the blockbuster “God of War”, although with much less gore, no racy scenes and a wee bit more cheerful story. I even had the perfect name (which I’m keeping close to my vest for now), as well as the main characters’ designs. There were five particular scenes that kept me inspired, and I even wrote down the dialogue of each one so I could remember them in as much detail as possible.

Step 2: Identify the resources and skills at hand

Once we have defined what we want to accomplish, we identify what resources and skills we have that can be of use. Each of us is unique, and our “skill set” as independent videogame developers varies from case to case. This step also includes defining the budget we currently have available, as well as a timeline, if any.

In my particular case, I am a software developer / .NET programmer. I have some experience as a web developer and, about 20 years ago, I used to provide technical support to the image centre of a famous newspaper. These last two entries in my resume gave me enough knowledge to create and handle digital images.

Here is where things get interesting: about 10 years ago, a huge project for a Silverlight solution presented itself, and my day-job boss asked me to learn and master C#. As the obedient employee I’ve always been, I downloaded XNA and published three games (not on her time and dime, though I have to say that the Silverlight application was a success). To do so, I created a game engine that uses differential frame animations for 3D models. It is like a scripting language that can be extrapolated to generate in-between frames (e.g. change the animation speed to anything and still get a 60 frames per second animation clip) and that allows me to add frames (e.g. play two animation clips on the same model at the same time) and subtract frames (e.g. insert a set of frames right in the middle of an animation clip). As with any scripting language, the best tool to create these animation clips is Notepad. Eventually, I created a set of in-house tools to aid in the creation and editing of animation sequences. I call these my “in-house animation studio tools”.
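
To illustrate the in-between-frame idea (this is not the actual engine code; the single-angle `Clip` structure is a simplification invented for the example), generating a pose at any point in time and blending two clips can be sketched like this:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// A clip stores a few key poses (here, one joint angle per pose); in-between
// frames are generated at run-time, so the same clip can play at any speed
// and still produce a fresh pose every frame at 60 fps.
struct Clip {
    std::vector<float> keyAngles; // key poses, evenly spaced in clip time
};

// Sample the clip at a normalized time t in [0, 1] by linear interpolation
// between the two surrounding key poses.
float SamplePose(const Clip& clip, float t)
{
    const float  pos  = t * (clip.keyAngles.size() - 1);
    const size_t i    = static_cast<size_t>(pos);
    const size_t j    = std::min(i + 1, clip.keyAngles.size() - 1);
    const float  frac = pos - static_cast<float>(i);
    return clip.keyAngles[i] * (1.0f - frac) + clip.keyAngles[j] * frac;
}

// Two clips playing on the same model can be "added" by blending their
// sampled poses with a weight.
float BlendPoses(float a, float b, float weight)
{
    return a * (1.0f - weight) + b * weight;
}
```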

Later on, when Windows 8 was launched, I migrated this engine of mine to C++ as a Metro-style application by copying and pasting code… although I did have to translate some classes like “Generic.List” to native arrays (commonly known as “vectors”), as well as the “.” notation to “->” for object references. Also, I had to implement the “DrawIndexedInstanced” function in order to substitute XNA’s SpriteBatch class with something that would run on DirectX 11.
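
As a hypothetical before/after of that translation: a C# `List<Enemy>` accessed with “.” becomes, on the C++ side, something like a `std::vector` of pointers accessed with “->”. The `Enemy` type here is invented purely for illustration:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// In C#: List<Enemy> wave; total += wave[i].Health;
// In C++: a std::vector of smart pointers, and "." becomes "->".
struct Enemy { int health = 100; };

int TotalHealth(const std::vector<std::unique_ptr<Enemy>>& wave)
{
    int total = 0;
    for (const auto& e : wave)
        total += e->health; // "->" for pointer-held objects
    return total;
}
```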

About the budget available for the Dream Build Play challenge, well, there is none. I’m actually running on a shoestring, so there is no way I can incur an unplanned expense.

The timeline is defined by the Dream Build Play 2017 Challenge. Originally, we were given 6 months to create an entry. Granted, a finished product is not required, and even a running prototype is acceptable. Still, this is a contest, and 6 months is a tight timeline to create anything from scratch.

Step 3: Identify what we require to reach our goal

This is the discouraging part of project planning: this is where we identify everything that we don’t have but will need to create our dream game. Unless we know exactly what we are doing, we realize more often than not that, unfortunately, we are not the right person for our very own game. No matter, though, for where there is a will, there is a way. Here are some alternative paths to acquire what we are missing:

  • Purchase: Game assets that we cannot create in-house can be purchased in an asset store.
  • Outsource: Certain tasks that cannot be done in-house can be outsourced to an external team. This is a very dangerous option, since most external teams bill by the hour, an approach that more often than not drains our financial resources while providing very little bang for our buck.
  • Learn: If the budget is limited and the project’s timeline is somewhat flexible, we can purchase a book or take an on-line course to acquire / hone the skills that we require to create our dream game.
  • Team up with another indie: When a project is too big for a one-person team, it is best to create a team of people with skill sets that complement each other. If all members have the same skills, then every team meeting can be described as a match about who’s right or wrong, and the whole project is reduced to following the instructions of the alpha member. On the other hand, if all skill sets complement each other, then each member can focus on his or her area of expertise and together create a masterpiece.

In my particular case, here was my first reality check: I looked at my dream game vision, I listed my skill sets, I counted the resources at hand, and I came to the sad conclusion that there was absolutely no way I could materialize my dream game within the given timeframe. The only way I could still participate in the challenge was to reduce the scope of the game and start planning all over. So, I created a “spin-off” of my dream game: instead of a fully 3D fighting / action game, I focused on creating a side-scrolling brawler – a 2D game, but with actual 3D models. I created a new story and a whole new set of characters. It is not what I always wanted to build, and the story is nowhere near as appealing as the original one, but it has some interesting ideas that I’m looking forward to implementing. That is how “Antimatter Instance” was conceived.

The advantage of putting my dream game aside and creating a spin-off instead is that the bigger vision stays intact, waiting for the day I can revisit the project and implement it, perhaps even in the following DBP challenge. In the meantime, everything I create for the spin-off can be of use for the bigger project.

Step 4: Get the tools to use for the project

Once we know what we have and what we are missing, it is time to round up all resources, get the tools we need, install them and start working on our dream game.

Talking about “Antimatter Instance”, here is a quick list of the Microsoft tools chosen for this project:

  • Game Engine: The game engine selected for this project is my home-made game engine, running as a UWP application on DirectX 11. The main reason for this choice is that I am very familiar with this engine, and I’m quite confident that the animation tools I have for it can be used to create an entry worthy of the Dream Build Play 2017 Challenge.

    To elaborate a little bit, all available game engines have their advantages and drawbacks. One of my previous games, “A Snowy Slalom” (a proud participant in the DBP 2012 contest), had a generation algorithm that created slalom tracks at run-time, which required the use of skinned meshes for the tracks, assembled in an array similar to a chessboard. Due to performance challenges, it was imperative that these track components be drawn in batch mode, a need that reduced the number of commercial game engines that could be used at the time. Because of this technical roadblock, it made complete sense to create an in-house game engine using DirectX 11. Despite popular belief, it was not that difficult to create, and the blueprint can be found in another article I wrote some time ago for Microsoft’s tech wiki knowledge base. This same approach was implemented in my third-person shooter by generating mazes at run-time.

    However, a side-scrolling brawler does not need the background drawn in batch, since the number of individual meshes is much smaller than in an open-world game (to throw some numbers around: 600 meshes in my slalom game compared to 50 in my side-scrolling brawler, both at 60 fps). Instead, the main feature needed is an animation engine with a sophisticated hierarchy and programmable flow that can emulate a sparring match. In this regard, I’m inclined to believe that my home-made game engine has reached this level (as seen in the featured video of this article), although I have to admit that it would take me a few months to reach the quality of the user interfaces that commercial game engines provide.

    There is one big advantage about using my good ol’ game engine: The model editor, included in the tools I created during my XNA days, allows me to mix and match mesh parts and bake them in a single model embedded in an X-File. For example, “Anabelle”, the main character of “Antimatter Instance”, is a re-incarnation of “Alice”, the lieutenant in “Battle for Demon City”, who in turn is a re-incarnation of “Danika”, the main skier in “A Snowy Slalom”, who in turn is a re-engineered character of “Aida”, the mermaid from “Merball Tournament” (well, at least half of her).

    The other advantage is that I can implement special animation effects to be computed at run-time, as seen on the work-in-progress video shown at the beginning of this article, like the animation of the hair, the skirt and the eyes, among other elements.

  • Development Environment: For this project, I have a Visual Studio 2015 development environment with the Windows 10 Anniversary Edition SDK. I thought about upgrading to Visual Studio 2017 with the Creators Update SDK (edit: now there is a Fall Creators edition), but the truth is that these kinds of upgrades always come with an unexpected technical challenge and, because of the aggressive timeline, I really don’t have time to spend on any surprises.

    For example, when I migrated from Visual Studio 2013 to 2015, the internal class structure for a DirectX game changed, which was expected when changing from a Metro-style to a UWP application. What I was not expecting was that the helper class used to load game assets in the background changed as well, which broke my object pool managers. This meant that no shaders, textures or sounds could be loaded anymore. It took me a while to fix them, since I am not a C++ expert.

    Also, when I changed Windows 10 SDK versions from the UWP version (10240) to the Creators version (15063), the DirectX SDK could not be compiled anymore. I don’t have the exact error message, but it was something like a particular function within the DirectXMath library returning a result in an unsupported format, so DirectXMath could not be compiled. Later on, I found out that there was a hotfix for the header file (it’s documented on GitHub), but I felt rather nervous about changing official libraries. In the end, I went one version back and decided to stay with the Windows 10 Anniversary Edition.

  • Test Environment: As a UWP game, I’m targeting consoles, PCs and mobile devices. To test the game, I have an Xbox One S, a PC with an NVidia GeForce 8500 and a tablet with an integrated Intel HD Graphics adapter. To elaborate, the Xbox console’s technical specs are standard. However, the technical specifications of PCs vary widely from unit to unit, and the same problem is present with mobile devices. The Microsoft Developer Network suggests using an Atom device (that would be my tablet) to test the game. The line of thought I’m following is that, if the game runs on my tablet, then it can pretty much run anywhere.

  • Sound: The tool I’m using for background music and sound effects is Microsoft’s DirectMusic Producer. This is one of those hidden treasures that failed to get the recognition it deserved when it was published – personally, I’d have given it a standing ovation. It can no longer be used to create the “mood” at run-time (since DirectSound is now deprecated), but it can still be used to create and edit MIDI files. In fact, I created the background music in the work-in-progress video with this tool. It was one of the first pieces I wrote back in the XNA days.


I am well aware that this should have been the very first entry in my developer blog. After all, it is rather pointless to list the tools to be used for a project when the project itself is halfway done. In my defense, if I had stated on day one that I was going to use Notepad as my primary tool, I would have lost all my readers right then. On the other hand, by creating a work-in-progress video showing how the game will probably look and then, and only then, stating that I’m using Notepad as my primary tool, I get to keep my readers… at least for the duration of the video.

As an “indie”, the skill set I have puts me in a unique position:

  • I am a software developer who makes his living by creating custom applications that fulfill very specific needs. That is why, when I don’t have the tools to achieve something, I create them. It’s second nature to me, and I have to admit that it is part of the fun of being an independent developer.
  • On the other hand, I really want to be an artist. I know, my art sucks, but that doesn’t mean I don’t enjoy creating masterpieces. “Antimatter Instance” has given me the opportunity to create some quite fluid animation clips. True, the in-house approach means that only my game engine will be able to play those clips, but that’s ok since I don’t think anybody would hire me as an animator anyway.

These two roles I play as an “indie” complement each other and have been the factors that defined the technical framework and the tools I use to create videogames. Everybody is unique, and I don’t expect the same approach to work for someone else. But I really hope this utterly long post provides some good insight for other game developers.

Dev Diary Entry #4: I Just Had to Add Motorcycles


About a month ago, while trying to figure out why a particular EDI Purchase Order system was not working properly, I had a vision of a scene for my video game involving motorcycles driving by a lake. I know, it sounds like a delusion from alcohol poisoning, but the scene was very vivid and I really wanted to implement it. I pondered the pros and cons and, after a while, concluded that it was already late September, so there was no way to accomplish this scene and still finish the game on time before the end of the year.

Weeks passed by, the scene kept popping up in my mind over and over (the EDI system kept failing too)… and then I decided to reconsider. Once again, I pondered the pros and cons, and in the end I changed my mind and went for it. After all, we all do this for fun and, if I’m not happy with what I’m delivering, then there is no point in doing it anyway.

A software developer would have gone to an asset store and bought the 3D model that fit best. However, a graphic designer would never pass up the opportunity to model such a stylish vehicle. I mean, if we were talking about a dirt bike, then anything sturdy would do. But the game makes reference to some astrophysics concepts, so I need something futuristic.

So I went ahead, dusted off my drawing pad and started to indulge this nagging impulse that I knew could very well cost me the project.

Besides, the last two development log entries were very technical (there is a gambling pool on the percentage of readers who will actually read the entire last article), so I thought it would be a good time to “flip hats”, cross to the artistic side of game development and share with other indies what works for me when it comes to creating a 3D model. It may not be for everyone, but it works so well for me that this technique deserves the benefit of the doubt.

Step 1: Research

The very first step that any modeler should take is to research the topic. Even if we are already experts, doing research will activate the right side of the brain and inspire our creative skills. Due to copyright laws, I cannot paste pictures in this article (well, I could for editorial purposes, but I don’t want to push my luck). Suffice it to say that browsing the Internet is a great (and cheap) way to learn about the topic at hand and get an idea of what the 3D model should look like.

Besides, I know absolutely nothing about motorcycles.

Step 2: Orthographic Drawing

When I was a young lad, I took a course on Technical Drawing and Industrial Design. Back then, I didn’t fully appreciate how useful the techniques I learned would be. Today, I know that this step is paramount for 3D modeling. Ok, granted, if you are modeling a character that looks like a sponge, then by all means grab a cube, stretch it, add some eyes and you are done. However, if you are creating something as stylish as a racing motorcycle, then the orthographic drawing should be the first step.


Ok, this drawing is awful. It is, by far, one of the most inaccurate orthographic drawings I have ever done, to the point that my technical drawing teacher would fail me in an instant. However, I’m in a hurry, and this drawing has all the information I need to start.

Step 3: Dissection

The first piece of information that pops out of an orthographic drawing is the number of individual mesh parts that will be needed. Ok, granted, I could approach this as one big chaotic piece of clay. However, modeling is much easier if the object is divided into mesh parts that are addressed one at a time, even if at the end we may merge them into one big, not-so-chaotic-anymore piece of clay.

The list of mesh parts came out as follows:

  • Main Body
  • Shield
  • Front Wheel (including steering handle)
  • Back Wheel
  • Side Mirrors
  • Tail Lights
  • Exhaust Muffler

Step 4: Digitalization

Actually, the next step should have been to create an orthographic drawing for each and every mesh part identified in the previous step but, let’s face it, no one has the time to go through that, and it would be overkill considering that all I need is a low-polygon-count 3D model.

So, instead of wasting time, I jumped to the step where I scan the orthographic drawing, ditch the pencil and paper and proceed with the digitalization process.

1. The first thing to identify is the center of mass. This is, by far, the most important coordinate: all axes will start from it. To identify the center of mass, I answer the following question: if the ground were to blow up, how would this model fly away, assuming no dismemberment happens? As the object flies to its doom, its center of rotation would be the center of mass.

As Albert Einstein himself would say: “This is a thought experiment”. Do NOT do this at home. Or the office.

In case of my motorcycle, the center of mass is identified as the big bright green dot right in the middle (see next figure).

2. The next thing to do is to identify all the axes – usually one axis per mesh part. Since this is a man-made machine, this task is pretty easy. It is also very important, since some of these axes will become a “bone” in the 3D model’s skeleton (call it a “skin weight”).

3. The next step is to convert each mesh part into key vertices, assuming that a straight line between them will reproduce the original image. Think of it as the math version of drawing by numbers.

The orthographic drawing is very useful for identifying key vertices, and I do this by putting a colored dot on each of them. There pretty much must be at least one on every curve; the more vertices, the smoother the curve will be. The projection lines are also an obvious indicator of where vertices are needed. Also, I add vertices in groups of four, assuming a transversal cut of the mesh part. This will be very useful later on.


Now, each vertex must be identified in at least two views so I can measure the three coordinates needed to identify a vertex in space. In my case, I used the lateral and the top view. The front view was rather useless because a motorcycle is a rather complex object when seen from the front.

4. The last step of the digitalization process is to measure the distance from each mesh part’s vertices to the mesh part’s axis. Most graphics software has a tool to perform this measurement, although for simple models the pixel position minus the axis coordinate will suffice.
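
The steps above can be sketched in a few lines of C++. The convention assumed here (my own, for illustration) is that the lateral view supplies a vertex’s horizontal position and height in pixels, the top view supplies its depth, and subtracting the center of mass turns pixel positions into model-space coordinates:

```cpp
#include <cassert>

// A pixel position measured on one of the scanned orthographic views.
struct Pixel   { float x, y; };
// A vertex in model space, relative to the center of mass.
struct Vertex3 { float x, y, z; };

Vertex3 MeasureVertex(Pixel lateral, Pixel top,
                      Pixel centerLateral, Pixel centerTop)
{
    Vertex3 v;
    v.x = lateral.x - centerLateral.x; // shared axis between both views
    v.y = lateral.y - centerLateral.y; // height, from the lateral view
    v.z = top.y     - centerTop.y;     // depth, from the top view
    return v;
}
```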

Step 5: Data Analysis and Vertex Extrapolation

Orthographic drawings are really useful for the modeling process. However, they can only give vertex information for at most six views. In order to create a 3D model that can be rotated at will, additional projections at different angles will be needed – the more projections, the smoother the 3D model will be. There are drawing techniques using fancy rulers that can be used to create the required orthographic projections on paper but, let’s face it, no one has that amount of time.

This is where software tools like AutoCAD come into play: from the key vertices that were digitalized during the previous step, I extrapolate the information and estimate the location of vertices for different projections (hence the name “key vertices”).

However, I have no budget for these software tools. No matter, though, for a similar analysis can be done using Microsoft Excel, assuming cylindrical coordinates.
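
The spreadsheet math can be sketched as follows, under the assumption (mine, for illustration) that each cross-section is treated as an ellipse whose half-width comes from the top view and half-height from the side view; a vertex on an intermediate projection at angle theta is then estimated from the parametric ellipse equation:

```cpp
#include <cassert>
#include <cmath>

// Offset of an extrapolated vertex from the mesh part's axis, in the plane
// of one cross-section (y = horizontal, z = vertical).
struct Point2 { float y, z; };

// a: half-width of the cross-section (measured at 0 degrees, top view)
// b: half-height of the cross-section (measured at 90 degrees, side view)
// thetaRadians: angle of the projection being extrapolated
Point2 ExtrapolateRing(float a, float b, float thetaRadians)
{
    return { a * std::cos(thetaRadians),   // horizontal offset from the axis
             b * std::sin(thetaRadians) }; // vertical offset from the axis
}
```

Running this once per key vertex at 45 degrees gives the first iteration above; running it again at 22.5 and 67.5 degrees gives the second.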

Here is a plot of the collected data for the main body mesh part (note the blunt sectional cuts on the right).


Here is the first iteration: Projection at 45 degrees.


Here is the second iteration: Projection at 22.5 and 67.5 degrees.


I could continue, but these two iterations are more than enough for a low-polygon-count 3D model.

I did the same for the Shield and the Side Mirrors.


All the other mesh parts are blunt cylinders, so there was no need to get lost in calculations.

Step 6: Model Baking

Once all vertices have been identified, they can be put together in order to create the “faces” that will become the polygons of the 3D model.

I have a confession to make: I really, really wanted to use Paint 3D for this project (Microsoft contest = Microsoft tools = Microsoft technology). However, I couldn’t find a way to enter per-vertex information in this application. No matter, though, for there are plenty of other tools that can accept per-vertex information and bake it into a 3D model. The important knowledge here is how to create a model from a drawing all the way to the vertex buffer. In my particular case, I created a tool back in the good ol’ XNA days that takes per-vertex information from Excel and converts it to a 3D model in an X-File. If I didn’t have this tool, I’d dump the information into an OBJ file using Notepad (although this file format does not support skin weights).
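
A minimal sketch of that OBJ fallback, covering just the “v” (vertex) and “f” (face) statements of the Wavefront format; note that OBJ indexes vertices starting at 1:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

struct V    { float x, y, z; };
struct Face { int a, b, c; }; // zero-based indices into the vertex list

// Serialize vertices and triangular faces as Wavefront OBJ text. As noted
// above, this format carries no skin-weight information.
std::string ToObj(const std::vector<V>& verts, const std::vector<Face>& faces)
{
    std::ostringstream out;
    for (const V& v : verts)
        out << "v " << v.x << ' ' << v.y << ' ' << v.z << '\n';
    for (const Face& f : faces)
        out << "f " << f.a + 1 << ' ' << f.b + 1 << ' ' << f.c + 1 << '\n';
    return out.str();
}
```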

Step 7: Painting

Once all vertices have been baked, the end result is a monochromatic model on the screen. Depending on the software used for baking, a texture can be associated, which implies calculating texture coordinates for every vertex. In my case, I’d rather skip the texture and focus instead on a material-color approach. Later, at run-time, I can change the colors at will so I can have the same model in different colors / ensembles. The final result can be seen in the video attached to this page.


The process I follow for 3D modeling may seem a little convoluted to many fellow indie game developers. However, the important knowledge is not the choice of software tools, but the process of creating a 3D model starting from the drawing board (literally) all the way to the data collection phase. If I were to use Maya, Blender, COLLADA or Paint 3D, I would still start with the research and the orthographic drawing anyway. This process gives me very tight control over the position of each vertex, which translates into an accurate 3D model of the drawing at hand. As you can see in the images and the attached video, it really works for me. This motorcycle took me six days of my precious free time (about 15 hours in total – damn hockey games)… and I enjoyed every minute of it.

Dev Diary #3: Interaction between XAML and DirectX 11


Foot sliding is a common problem in computer animation. In short, if you create a “walking” animation, the feet are expected to stick to the ground on every other step as the character moves forward. If the character does not move forward at the same speed as the feet, it gives the impression that the feet are “sliding” over the floor. The solution to this problem is to adjust the absolute translation speed of the character by adding the relative speed of the sliding foot with respect to the floor. In other words, if the foot slides backwards, increase the speed of the character; if the foot slides forward, decrease it.
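
The correction just described fits in one line of C++; the sign convention (positive = forward, in the character’s units per second) is an assumption of mine for the example:

```cpp
#include <cassert>

// If the grounded foot moves at footRelativeSpeed relative to the character,
// the character's world speed must absorb that motion so the foot stays
// planted: a foot sliding backwards (negative) speeds the character up, a
// foot sliding forwards (positive) slows it down.
float CorrectedCharacterSpeed(float characterSpeed, float footRelativeSpeed)
{
    return characterSpeed - footRelativeSpeed;
}
```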

A walking animation is a rather straightforward fix. However, for much more complex animations, like a spinning hook kick, the adjustment is not that easy to define.

Most animation tools come with a way to fix this problem automatically… or at least some good developer has come up with a solution and been kind enough to share it with the world in the form of a script, plug-in or add-on. However, since I’m doing my animations with my very own animation studio, it seems that I’m out of luck.

No matter, though, for if there is a will, there is a way. So I went ahead and created a custom tool to solve this very special need (after all, that is what I do for a living) by trying a very special type of Visual Studio project where XAML and DirectX coexist. I won’t lie, this was not an easy experience: the learning curve was very steep, and unfortunately the documentation is very difficult to follow. However, once the tool was done, I was able to grasp the main idea behind this kind of project, and now I can see the benefits of combining XAML and DirectX.

I’m surprised no one has published anything about this type of project. I’m not talking about marketing brochures or press releases (there are plenty of those): I’m talking about the coding side of the story, expressed in terms that are easy to understand. So I decided to write a blog entry about it.

XAML and DirectX

For those who are really old school (like me), I need to make a quick parenthesis and define XAML as a markup language created by Microsoft, kind of like the language used to create web pages. With this language, we can create quite impressive user interfaces and implement them across a wide range of technological platforms to create business-oriented applications (msdn – 2017).

With the release of the Windows Store and the concept of Universal Windows Platform applications, XAML has gained huge momentum. It’s been such a success that Microsoft thought about mixing XAML and DirectX. Ever since Visual Studio 2012 was released, if you want to create a videogame or a multimedia application for the Windows Store using DirectX, you will find that this API comes in two main flavors: with and without XAML.

Now, what I learned from creating this custom tool is that the combination of XAML and DirectX is great for multimedia applications, but my conclusion is that this combination may not be quite optimal when building action games.

Before getting to the reasons behind this conclusion, let me show a video that illustrates the interaction between DirectX and XAML so we can have a graphical reference about the topic.

To understand what just happened, we need to look at the block diagram of the inside of this application and compare it to the “traditional” approach, by which I mean the one without XAML.

Traditional Block Diagram

When you create a DirectX 11 App (Universal Windows) Project using Visual Studio 2015, the template created (excluding the “Game” class – that one is on me) has the following block diagram:

DirectXClass Diagram

  • The App class is the starting object, and it is in charge of interacting with the operating system. Windows will load, unload, activate and deactivate the game through this object.
  • The Device Resources class is a wrapper that helps us deal with the DirectX API, freeing us from the most obscure DirectX objects.
  • The Renderer class is the object that loads all our graphic resources, keeps them in buffers and waits patiently until it gets the chance to draw them.
  • The Main class is, as it sounds, the main object of the game.
  • All the previous classes come included in the DirectX project template. What I like to do is create a “Game” class and encapsulate all my routines there. I do that for portability purposes so that, if I ever need to change versions of Visual Studio, I just need to copy the Game class and beyond, knowing that the main hooks will apply to any other template (this was a life-saver when I migrated from Metro-style apps in Windows 8 and Visual Studio 2012 to UWP in Windows 10 and Visual Studio 2015).

The dynamic here has some similarity with a typical XNA project:

  • There is an infinite loop in the App class that triggers the “Update” routine, then the “Draw” routine and finally the “Present” routine. This is what most people call the “Game Loop”.
  • The “Update” routine will call the “Update” method of the Game class, putting all components in motion.
  • The “Draw” routine will call the “Draw” method of the Game class, rendering all graphics.
  • However, all input data is handled by the “App” class. What I do is populate input information in structure variables that the Game class interprets during the “Update” call.

XAML and DirectX Block Diagram

When you create a DirectX and XAML App (Universal Windows) Project using Visual Studio 2015, the template created (excluding the “Game” class – that one is also on me) has the following block diagram:

XAML and DirectX Diagram

  • The “App” class is still the starting object, although that’s pretty much all its functionality.
  • The “DirectXPage” class is the actual XAML page. This is where we would implement all our user control elements for input data. This class also takes over almost all input events.
  • The Device Resources class is still the wrapper that will help us deal with the DirectX API, freeing us from dealing with the most obscure objects from DirectX.
  • The Renderer class is still the object that will load all our graphic resources and keep them in buffers, and will wait patiently until it gets the chance to draw them.
  • The Main class is still the main object of the game.
  • All the previous classes come included in the DirectX project template. As in the traditional approach, I also recommend creating a “Game” class and encapsulating all game routines there, for portability purposes.

The biggest, meanest, most complex and most important difference, which for some odd reason nobody ever seems to remark on, is that this new approach is a true multi-threaded application.

Here is how things get on fire:

  • The “App” class creates an instance of the “DirectXPage” class, which in turn creates and launches two independent threads, each one with its own infinite loop:
    • The first loop is launched to link input events from pointing devices, like the mouse or a touch screen.
    • The second loop is the actual Game loop, triggering the Update call, the Draw call and the Present call.
  • Aside from all this, the DirectXPage, as a XAML page, has its own thread, usually referred to as the “UI Thread”.

To describe the implied challenge in colloquial terms, we can compare the threads of a multi-threaded application with the spoiled children of a highly dysfunctional family:

  • Every single one of them hates to share.
  • The communication between them is reduced to the bare minimum.
  • If one of them ever breaks protocol, we get a drama that will last until the next Thanksgiving dinner.

It gets more volatile if we consider the nature of these threads:

  • No one but the UI Thread can interact with UI elements in a XAML page. That’s almost a cardinal rule.
  • DirectX has some roots in the good ol’ COM API. This means that objects created by one thread can only be accessed by that very same thread. If a different thread touches them, the whole thing crashes – I learned that the hard way.
    • Ok, granted, there is a way to make DirectX be able to handle multiple threads, but the procedure to follow is waaaaaaaaaaay above my level.
  • The input thread is a wee bit more flexible. However, this is the one in charge of communicating input data to the other two drama princesses. It’s still doable, but it is like walking on eggshells.
  • The class hierarchy doesn’t help either:
    • The “DirectXPage” class knows the “Main” and the “Game” classes. However, neither of these last two classes knows the “DirectXPage” class. Adding an include entry in the header file as a reference to the DirectXPage creates a circular reference that generates an error at compile time. I’m sure there is a way to fix this, but at this point I have absolutely no idea how.
    • Even if the circular reference is solved, only the UI Thread can access UI elements, so there is not much benefit here.

The solution I found was to abandon the idea of direct communication between all components and, instead, implement indirect communication through messages stored in a common repository (a bunch of variables known by everybody).

  • The “Game” class is known by everybody. As such, it becomes the perfect place to store our messages.
  • The “Main” class template already comes with a “Critical Section” object, which is the one to use to regulate access to our shared repository when reading and storing messages.
    • For those who are not familiar with a multi-thread application approach, this “Critical Section” object is also known as “The Bathroom Key” (Purwar – 2013): In order to use the premises, you need to get a hold of this key. Once you finish your business, you release this key for the next person to use.
    • Likewise, if any thread wants to access the common repository, it needs to get a lock on the “Critical Section” object, and release it once it is done.
    • I assume there is no need to say that in both cases you need to clean the aftermath mess on every use.
  • Following this approach:
    • Every time the UI thread wants to say something to the “Game” or the “Main” class, it needs to get a lock on the Critical Section object to avoid any conflict. I have labeled these messages in the diagram as “BG Events”. In my case, I’m using an enumeration. I know, it doesn’t really fit the formal definition of an “event”, but it is close enough, and it kind of works for our purposes.
    • The main game loop needs to get a hold of the Critical Section object every single time it calls the “Update” and “Draw” methods of the “Game” class, ensuring that only one thread ever accesses objects created using the DirectX API. This gives the “Game” class exclusive access to the common repository.
    • Now, on every “Update” call, the “Game” class checks for any messages (or “BG Events”) that need to be processed. If so, it does what the message requests.
    • If the “Game” class needs to send information to the UI Thread, it leaves a message (what I have labeled as a “UI Event” in the diagram) in the common repository.
    • The UI Thread seldom runs as fast as the main game loop. However, it has an event (“CompositionTarget::Rendering”) that is raised on every Draw call. This is the perfect place to get a hold of the Critical Section object and process any message coming from the main game loop.
    • Likewise, the Input thread will get a hold of the Critical Section object when it wants to communicate input data to any of the other two threads.

Benefits of using XAML

After reading all this rambling, you may be wondering why we should use this template if the implementation of XAML and DirectX is that complex. The answer lies in the benefits provided by XAML:

  • A XAML page grants access to some very useful controls, like Textboxes and the File Open dialogs. This saves A LOT of work.
  • XAML runs on its very own UI Thread. By doing so, we don’t need to worry about a stutter in the UI controls, knowing that the Main Game loop will still run at the desired speed of 60 fps.
  • DirectX has always had a problem displaying text: processing a long string is a tall order, considering that it is processed and drawn 60 times per second. If XAML is left with the task of displaying text, we reduce the risk of dropping below the desired 60 frames per second.
  • Sometimes, informational data doesn’t need to be updated at the same speed as the game loop. For example, the health bar of the player could be updated just a couple of times per second, making this element a perfect candidate to be displayed by the XAML page.
  • As a game developer, if you want to split the screen in two (one half for each player), a XAML page should be able to accomplish that easily… although I have to confess that I have never done this – no one to play with in a one-man team.

There are also some reasons why XAML may not be an important component:

  • Based on the diagram shown previously, the game loop will get exclusive access to the computer resources on every “Update” and “Draw” call, leaving only a very small percentage to the other two threads during the “Present” call. This could make the UI thread even slower, especially for processing-heavy games.
  • Messages sent between the game loop and the UI threads are at least one frame behind. This could be the difference between winning and losing if the controls have been coded in XAML.


After I created my custom tool, I was able to grasp the “big picture” of mixing both XAML and DirectX. It wasn’t an easy project, and the development strategy described here about messages stored in a common repository may not have been the most optimal approach. I worked with XAML before, during the Silverlight days, and I have to say that I didn’t find much difference when using it for UWP applications.

However, I found controls like the “Open File” and “Save File” dialogs to be essential for multimedia applications (for me, it was a project saver). Also, I can see the performance benefit that RPG games could get when using text and input controls implemented in a XAML page (for example, sending messages between players using a XAML textbox while still rendering their fancy graphics in the background).

Still, for action games where control response is paramount, I’d still code everything within the main game loop, which makes a XAML page kind of an optional component. Then again, that is my personal point of view.


  1. “What is XAML?” Microsoft Development Network. 2017. Web.
  2. “Multithread: How to Use the Synchronization Classes” Microsoft Development Network. 2017. Web.
  3. “Understanding Thread’s Synchronization Objects with a Real Life Example”, Priyank Purwar, 23 Jun 2013. Code Project. Web.

Dev Diary Entry #2: Using Microsoft’s Graphic Debugger


When the DreamBuildPlay 2017 challenge was announced, I immediately put aside all personal projects I was working on at the time (they were all rather boring anyway). I created a new instance of my game engine, copied some animations from my previous games, baked some draft 3D models and used all that to build a quick prototype. Despite them being placeholders, when the main character started kicking butts (literally), I got pretty excited. As I went further on, I created more animation sequences as well as a new “combo attack” manager, and I was in the middle of adding some fancy graphics… when I noticed a small stutter every now and then. I got somewhat alarmed since having performance problems this early in the development phase is pretty much a dark omen.

Wondering what on Earth I had done wrong this time, I ran the Graphic Debugger feature included in Visual Studio. The numbers I got were unexpected, but they pointed out what my problem was. I fine-tuned my code and soon enough I had the performance issue under control – I hope. The process was so useful and smooth that I thought about writing a quick blog about it to share the experience.

Microsoft’s Graphic Debugger

To give a quick bit of history, the Graphic Debugger is a feature in Microsoft’s Visual Studio; it was introduced in the 2012 Professional Edition and became widely available with the Visual Studio 2015 Community Edition. The early version was a little bit confusing. However, the new version is so easy to use that, in just a few clicks, I got an analysis of the tasks that were run by the computer’s GPU, along with the time it took to execute each and every one of them.

Diagnose Procedure

So, without further ado, let me give a quick account of what was done in the hope that this same process could be useful to other indie devs: with my project loaded in Visual Studio, I went to the “Debug” pull-down menu, then “Graphics” and then “Start Graphics Debugging”.


The Graphic Debugger application requested to run in high elevation mode, and then my prototype was launched. As the game session went by, a number in the top left corner kept me informed about the time it took to draw the current frame. This number never went below 17 milliseconds (roughly equivalent to 60 frames per second), which is consistent with the fact that my game engine synchronizes with the display’s refresh rate (as we all should, for there is no point in presenting frames that the player will never see). I pressed the “Print Screen” button a couple of times to take snapshots of the frame at hand, and then I quit the prototype.

The next screen showed me a summary of the collected data.


In short:

  • The first graphic is a plot of the time that the GPU took to draw the requested frame. Overall, each frame took around 17 milliseconds to draw, with the exception of six instances:
    • The biggest chunk (which can be interpreted as a drop in performance) happens at the beginning of the session. This is expected since that is when the game loads most of the assets in memory, making the game a little bit unresponsive for about 10 seconds, just before the main menu is shown.
    • The second biggest chunk happens when the game loads all assets needed to execute the first level of the game. Likewise, this drop is expected.
    • The next two chunks, linked to an orange triangle on top, are the screenshots taken for the frame analysis. These two drops are also expected.
    • The last two chunks, and this is something I need to work on, happen when the game compiles, at run-time, the animation sequences for the main character. In other words, this is a problem and I had better add some code to cache those animations.
  • The second graphic on the Report tab shows the same information as the previous plot, but seen as the number of frames per second. In other words, this is the mathematical “inverse” of the previous plot.
  • The bottom section has a list of individual frames that were captured by the diagnostic tool.

From the frames listed at the bottom, I clicked on the second frame (the first one usually is not that accurate). The next screen provided a detailed list of steps executed by the GPU for that given frame.


Those tasks that have the icon of a black paintbrush have a screenshot attached to them, so clicking on them will show, on the right, what was being drawn at the time.

To the right of the “Render Target” tab, there is a tab called “Frame Analysis”. On that tab, there are two options: “Quick Analysis” and “Extended Analysis”. The latter option breaks all the time, but the first one is good enough for most diagnoses.

After a quick analysis, the application shows a report of the collected data for the given frame.


The first section of this output report puts the collected data per task in a bar graph, focusing on the time it took to process each of them. When seen like this, it is very easy to spot problematic draws. The second part of the report has the same information in numbers. Likewise, the potentially problematic draws are highlighted in orange (actually, it’s salmon, however this is a technical post – the graphic design ones come later).

On the screenshot shown, in both sections, there are two tasks that pop out: tasks 467 and 470. Going back to the “Render Target” tab and clicking on the suspected tasks, I found out that my little trees shown in the background were being a process hog for my game. Altogether, these two tasks alone were consuming a third of the 17 ms threshold for a 60 fps game.

Data Analysis

Although I did implement a custom routine to have these trees drawn in different colors (part of the fancy graphic feature I was trying to implement), the shaders I created were not in any way complex. Moreover, these trees have a very low number of polygons (as seen on the report, each one has about 174 polygons) and that is why I was so surprised about them being the culprit.

Anyway, long story short, the problem can be summarized as follows: these trees are huge. It’s not quite evident on the screenshot, but these trees are instances of a 3D model, and that is why they look different based on the angle of view. This means that some branches are drawn on top of each other, using some transparency effect, meaning that the color of a given pixel is not final until all branches are drawn (not to mention that these trees overlap each other at some point). Now, given that a pixel shader is executed at least once for every pixel covered, and that the current resolution of the screen is 1680 x 1050 pixels (making the trees about 500 x 500 pixels each), this is A LOT of calculations, especially for texture sampling. Moreover, as the report shows, the GPU tried to draw 18 trees (9 on each pass), yet only 8 of them are actually visible, which means that almost half of the time spent was pretty much wasted (I say almost half because I still need two pairs of trees on each side in case the player needs to go side to side).


Usually, every time I work with custom shaders I run the Graphic Debugger to see how many things I broke in the process. Knowing where the time is allocated when drawing a given frame helps me focus my attention on specific shaders. In this case, after reviewing what I had typed, I did find a way to optimize the background trees’ pixel shader and that took care of the stuttering – for now. To be fair, my development system has an NVIDIA GeForce 8500 GT, which has pretty poor performance (it has a PassMark of 139, putting it pretty far down the list), so, if my game can perform well on this system then it should run fine on most computers out there.

I will continue to use the Graphic Debugger as often as I can. However, if in the final product you see brick walls instead of trees, then you know what happened.

Dev Diary Entry #1: The Game is Afoot

Note: The following article was published previously in my personal blog. However, in order to fully comply with the DreamBuildPlay 2017 Challenge rules, I decided to create a separate blog for my game “Antimatter Instance”. I can understand the benefit of having a dedicated blog for it: that way, the audience focuses on articles about the DBP entry, while other non-related personal posts are kept away. Anyway, without further ado, here is the original article:

When I saw that the DreamBuildPlay 2017 Challenge was announced, I was speechless. It was like the good old days were coming back, but this time re-engineered for a multi-platform approach.

Back in 2011, this very same competition was the one that motivated me to really learn about creating video games, and ever since then I’ve been absorbed by this hobby of mine. I mean, creating games is great, but without a strong motivation, everything I’d done pretty much never went beyond the status of a “doodle”. Ugly doodles, to be honest. Instead, the DBP challenge was the “carrot” that kept me pushing myself to publish the best project I could ever deliver. I got the chance to participate a second time in 2012, and I was really looking forward to the next one. Unfortunately, no new competitions were announced in the following years and, instead, Microsoft announced the “sun-setting process” for the XNA Creator’s Club.

After that, a few contests were announced for specific third-party technologies, but none of them were attractive enough to be a real motivation. Things were really slow for a while until the DreamBuildPlay 2017 Challenge was announced.

The dates are a little bit unforgiving, though: the announcement was in July and the due date is in December, which gives us about 6 months to create an entry. For those who have experience publishing games, we know that this time frame is pretty much the biggest challenge to overcome, since it usually takes about a year to create a good, playable game. That said, the most likely strategy that most of us will follow is to grab and “package” several assets in one single prototype, thoroughly test it, certify it as per the Windows Store standards and submit it at least a week prior to the due date (Internet bandwidth can be really mischievous during Christmas holidays). The game gallery could be reduced to a list of pretty proofs of concept… unless the contestants already have something to start with.

Here is where one of the biggest advantages of using a game engine comes into play: flexibility. As a developer, if you already have a game published using a flexible game engine, you can create a completely different game by re-configuring this very same engine in such a way that it will perform different tasks. I’m not talking about “re-skinning” an already published game – that would be cheating. What I mean is that the same game engine could be used to create, say, a “Platformer” or a “First-Person Shooter”, depending on how it is configured. Of course, creating two different games still requires a lot of effort. However, the basic tasks would be already taken care of.

Most 3D game engines are quite flexible in this regard: Use a different set of animation sequences, configure a different camera angle, then just implement a new game logic and a brand new game is created – all drawing, event handling, asset loading, object caching and even game states are taken care of, automatically.

So, that’s pretty much the strategy I’ve decided to follow: The same game engine that I used for my “Third-Person Shooter” and my “Sports” game, will be the core of a Side Scrolling Brawler that I have titled “Antimatter Instance”.

Here is where things get tricky: under the hood, the biggest difference between these three genres is the massive amount of animation sequences to implement in a fighting game. It’s not just about strikes and combos, but also about hurt animations from different angles, times two (the original and the “mirrored” version), times the number of game characters to implement. Even though I was able to create a working prototype within the last month, the sheer amount of work ahead just in animation sequences will be massive, and could be the factor that prevents me from delivering on time.

I mean, the main character uses a technique inspired by the martial art of Taekwondo. The henchman currently running in the prototype has a technique inspired by boxing (not Olympic-level boxing, which is horrible to look at, but a more professional level – thanks YouTube!!!). The character in the current cover art is one of the bosses, so the use of a Bo staff as a weapon is yet another set of animations to implement… and the list goes on and on.

Now here is a question that most readers are wondering: How come a brawler is named after an astrophysics concept? Well, to answer that, we just need to play the game.