Any technology which is distinguishable from magic is insufficiently advanced.

Job Interviews for Game Programmers

Posted: January 24th, 2016 | Filed under: Soapbox, Tutorials

Wow, 2 years without a post. Time sure flies when you’re having fun! Since I happen to know that a lot of my friends and former students are currently partaking in the great job hunt, I figured I’d write on this, given the large number of interviews I’ve ended up conducting over the years.

The most important thing to realize in interviews is that time is precious. During this time, the interviewer has to find out as much about you as possible and figure out where you might best serve the company. There is an objective behind every single question asked, no matter how trivial or complex. They do not expect you to be able to answer every question flawlessly, but instead seek a better understanding of your personality, aptitude and experience based on how you tackle the problem. So do not panic if it seems you are drowning in the unknown. Just breathe in and embrace the liquid mayhem.

There are 4 basic qualities that I personally look out for in interviewees. They are not guidelines dictated by any specific company, but have slowly evolved in my head over time. Here they are, in order of importance.

Communication
It is vitally important that you are able to express your ideas clearly, and understand what is being conveyed to you. If you do not speak the native language of the company well enough to be understood, that is a blocker.

More than that, the way you speak reflects the way you think, regardless of any language impediments you might have. A good interviewee speaks clearly, concisely and to the point. A bad one rambles, beating around the bush before eventually getting there (or sometimes never getting there at all). This is not a high school test where hitting the right keyword gets you the marks because it is in the marking scheme.

Often, I will ask a deliberately vague question to see if the interviewee will clarify the problem parameters, and if not, which direction they go in and what assumptions they make.

Aptitude
One of the main tasks of the interviewer is to figure out what level you are at, how valuable you would be as an employee (i.e. how much they should pay you), and whether you would be suitable for a lead role. From simply talking with many people, I’ve come to recognize that there are certain basic qualities that allow easy categorization regardless of discipline (i.e. this works for artists, producers, etc. as well). As much as people try to over-represent themselves during interviews (from either ego or deceit), these are aspects that simply cannot be falsified. Job experience and academic achievements do not factor into this assessment.

Non-starters are people with basically zero aptitude. They fail due-diligence questions like “What is a dot product?” or “Describe inheritance in OOP.” I have encountered Masters graduates who could not write a single coherent line of code. Sad day.

Juniors have a good grasp of the concepts and are eventually able to solve problems. They will often fail to consider edge-cases and do not necessarily grasp the implications of the smallest of their decisions, at least until it is pointed out to them. However, good juniors are humble, eager to learn and are reasonably good at solving puzzles.

Mid-level engineers are insufferable because they know just enough to be dangerous. Their hallmark is over-thinking and over-engineering. They will often have elaborate and complicated designs to circumvent pitfalls they encountered in the past while opening up a dozen more. They look for “tricks” in the interview questions and try to pre-empt them, often fouling up the basic answer in the process. They almost always think they are much better than they actually are, not necessarily through hubris, but because they haven’t yet developed an appreciation of the breadth of things they are unaware they don’t know. They write know-it-all articles, much like this one.

Seniors are hard to find. They have developed a consistent internal framework through which they can solve the majority of problems they have not encountered before. They have a holistic sense of the game development process and are empathetic to how their choices affect the project, and how solutions and/or compromises can come from other disciplines. Their solutions may seem magical or out-of-the-box to others when in fact, they simply have a bigger and better organized box. When they encounter problems they have no immediate answer for, they get excited because it is a learning opportunity and a chance to expand their box even more.

Specialization
Most companies are looking to fill a gap: we need somebody to do X, Y and Z. The larger the company, the more specialized this tends to be, because the added depth adds a lot more value to the company than just another generic programmer would. If I were interviewing a graphics programmer, I would expect them to write better shaders than I can, and maybe, if I am lucky, I can pick up a thing or two during the interview.

Smaller companies prefer general programmers because they can tackle many different tasks and generally solve the problem with a satisfactory result. They are the most cost-efficient way to get the game out of the door. This is what results in the difference in quality (and budget) between indie games and those coming out of the mega-corps.

Sometimes during the course of the interview, it will become apparent that your current skills do not match the niche they are trying to fill. At this point, it is a good idea to point this out and declare yourself unsuitable for this particular role. It spares the interviewer the awkwardness of figuring out how to break it to you gently, and displays a maturity and clarity that will stand you in good stead when a position that does fit you opens up.

Passion
Cynics define passion as a direct measure of how little a company can get away with paying you to do the most amount of work. This is not incorrect. In the end, it all comes down to business. In a creative industry like game development, a heavily self-motivated employee will be a thousand times more productive than somebody who's just doing a job. They are also less likely to leave at the first sign of greener pastures, particularly in the middle of a project.

However, passion is much more than that. Why do you want that particular job in that particular company as opposed to anywhere else? What should the company do to keep you happy? What can the company do to make you deliriously ecstatic? If your answer is “More Money!”, I would recommend a job in the financial sector. Game development is not for you. Otherwise, being clear and truthful about what the company wants out of you, and what you want out of the company will make the relationship infinitely simpler and easier to manage.


The interview process is a two-way street. Both parties are trying to find out if there is a way for them to reasonably work together. Over-representing yourself is a common mistake as it murkifies the communication. (Besides, we know when you’re naughty!) I believe that for interviews at least, being honest and upfront (within the constraints of any NDAs) is the best policy. You don’t just want a job. You want a job that will make you happy, and will make your future employer happy to have you.

Here are some common interview questions. Can you tell which qualities they are trying to measure? (hint: it’s often more than one)

  • Why do you want to join our company?
  • Describe your proudest achievement to date.
  • What’s the biggest challenge you’ve encountered so far in your career? How did you overcome it?
  • What games do you play? What do you like about them?
  • What is your opinion of overtime work?
  • Please write some code that does this ridiculously simple task. What trade-offs did you make? How can you improve it?
  • What’s the difference between a vertex shader and a pixel shader?
  • You need to transport a dog, a cat and a mouse across a river. You can only carry one animal with you in your kayak at a time. You can’t leave the dog alone with the cat. You can’t leave the cat alone with the mouse. If you put the cat in the kayak more than twice, it will scratch your eyes out. What’s the minimum number of trips you need to make to get everybody over to the other side?
  • How would you deal with an unreasonable supervisor making stupid (in your opinion) decisions?
  • Have you any experience with Shaders? How about multithreading? Pathfinding? Networking? Physics? UI? Anything? Anything at all?
  • On a scale of 0 to 9, rate your C++

What is a Depth Buffer?

Posted: March 26th, 2014 | Filed under: Tutorials

How do you explain what depth-write and depth-test mean to a non-technical artist or designer? The obvious answer is to search Google for a well-written introduction to the concept of depth buffers, and point them in that direction. Somehow, my Google-fu failed this time, and I could not find an article without scary OpenGL code or mathematical formulae. So, here I am, writing this out. My public service for the year.

The Painter’s Algorithm

So let’s go back to the beginning when painting was a simple matter of putting brush to canvas. If a painter of yore were to draw the Success Kid meme, how would he go about doing it? It would probably go something like this:

Painter's Algorithm

You start with a blank canvas, draw in the background, followed by the kid, and finally the text. Notice that you are layering things on top of one another, from back-most to front-most. This method is called the Painter's Algorithm. So why can't computers do this too? Well, the answer is, they can! The only problem is that it is slow, because we end up drawing a lot of pixels that never make it to the screen. Consider the part of the background behind the kid. We can't see it, yet we draw it, only to overwrite it later.

So how expensive is it really to draw a pixel? Consider the obviously high-poly, normal-mapped, specular and fresnel lit Success Kid model. There are two factors under your control that directly affect how long it takes to render it on screen. The first is your vertex cost, which is roughly the complexity of your vertex shader multiplied by the number of tris, verts or polys that make up your model. The second is the pixel cost, which is the complexity of your pixel shader multiplied by the number of pixels the polygons occupy on screen (including all the alphaed-out bits).

For better or for worse, the shiny stuff that all artists love (and that therefore costs the most) usually resides in the pixel shader. Per-pixel lighting, normal maps, environment maps… they all skyrocket the price you pay for each pixel. That's why for shiny stuff, we want to minimize the number of pixels drawn as much as possible, especially those that will never be seen.

Depth Sorting

It didn’t take long for somebody to come up with the bright idea of drawing things in reverse order. If we draw the kid before the background, we wouldn’t have to draw the part of the background that is hidden (or if you want to sound real smart, occluded). So the draw sequence would go something like this:

Front to Back

So now, before we draw each pixel, we just need to check whether that pixel has already been drawn. If it is already there, we skip the pixel and proceed to the next one. As Farengar Secret-Fire would say, "Simplicity itself!".

This is where the depth-buffer comes in. The depth buffer serves as a record of what pixel is drawn and what is not. So as we draw things on screen (i.e. the color buffer), we record down in the depth buffer that we have drawn to this pixel. This is called depth-write. The next time we want to draw to the same pixel, we check the depth buffer to see if the pixel has already been drawn to, and if so, don’t do it. This is called depth-test.
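For the programmers reading along, the bookkeeping can be sketched in a few lines of C++. At this point in the story the buffer is just a "drawn or not" flag per pixel, and all the names here are invented for illustration (real GPUs do this in hardware):

```cpp
#include <cassert>
#include <vector>

// A toy depth buffer: for now, just a record of which pixels have been
// drawn, one flag per pixel.
struct DepthBuffer {
    std::vector<bool> drawn;
    DepthBuffer(int width, int height) : drawn(width * height, false) {}
};

// Returns true if the pixel was actually written to the color buffer.
bool draw_pixel(DepthBuffer& db, int pixel, bool depth_test, bool depth_write) {
    if (depth_test && db.drawn[pixel])
        return false;            // depth-test: somebody beat us to it, skip
    if (depth_write)
        db.drawn[pixel] = true;  // depth-write: record that we drew here
    return true;
}
```

Drawing front to back with both flags on, a second attempt at the same pixel is rejected, and its (expensive) pixel shader never runs.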

I know the question that’s burning a hole through your brain. “If it is simply a record of what pixel is drawn, why do we call it a depth buffer?”

What, it wasn’t? Well now it is, isn’t it? Before we answer that, let’s take a look at a problem we conveniently glossed over.

The Alpha Problem

Let’s revisit that approach with the full image:

Alpha Problems

Oops, do you see what happened there? Our "Success!" caption was drawn on a single quad, with all the transparent bits alphaed out. Since it is front-most, we draw it first. When we draw the kid, the renderer sees that those pixels have already been drawn to, albeit with zero alpha, and skips them, so we never get to see the pixels behind the spaces between the letters.

There are a number of ways to solve this problem. We could model each of the letters individually, so that the texture is fully opaque. The problem with that is that your vertex count increases a lot, especially if you want smooth curves. Also, it doesn't handle the case of translucent pixels (i.e. with partial alphas), which still need information about what's behind them. This might still work for some cases where the vertex overhead is not that big. If you can get away with this, do it!

Another way is alpha rejection: whenever the pixel shader sees that what is about to be drawn has an alpha value below a certain threshold, don't draw it! This avoids the cost of having many vertices, but still doesn't solve the translucency problem. It also ruins the day of some graphics chips (most notably the PowerVR chips found in iPhones and some Android devices), which optimize around the idea of pixels never being rejected by the shader.

The ultimate solution is acknowledging that we don't really have a clue what to do, and going back to the Painter's Algorithm. However, this time, we can be smarter about it. We divide all our objects into two groups: those with alpha, and those that are fully opaque.

First, we draw all the opaque geometry sorted front to back, making use of the depth buffer. However, instead of storing a plain "have we written to this pixel", we store the depth value (i.e. how far the pixel is from the camera). After that, we draw all the alpha geometry from back to front, checking the depth buffer against the depth value of the pixel we are about to draw. If we find that we are drawing behind something already drawn, we skip the pixel, because we know it will be occluded by an opaque object. Otherwise, we draw on top of it, blending with any pixel information that might already be present. Since we are drawing from back to front, there is no need to waste time writing to the depth buffer.

Opaque followed by Alpha

This leads us to the general rule:

  • For Opaque Geometry: enable depth-write, enable depth-test
  • For Alpha Geometry: disable depth-write, enable depth-test
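In engine code, that rule typically plays out as two sorted passes. Here is a rough C++ sketch of just the sorting part (object and field names invented for illustration):

```cpp
#include <cassert>
#include <algorithm>
#include <vector>

struct Object {
    float distance;  // distance from the camera
    bool  opaque;
};

// Produces the submission order: opaque geometry front to back (to get the
// most out of the depth buffer), then alpha geometry back to front (so
// blending has the right pixels underneath it).
std::vector<Object> draw_order(std::vector<Object> objects) {
    std::vector<Object> opaque, alpha;
    for (const Object& o : objects)
        (o.opaque ? opaque : alpha).push_back(o);
    std::sort(opaque.begin(), opaque.end(),
              [](const Object& a, const Object& b) { return a.distance < b.distance; });
    std::sort(alpha.begin(), alpha.end(),
              [](const Object& a, const Object& b) { return a.distance > b.distance; });
    opaque.insert(opaque.end(), alpha.begin(), alpha.end());
    return opaque;
}
```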

You should also be aware of how to let the engine or game know what is opaque and what is not. In most editors, you can simply disable blending. This is really important. An opaque object with blending on will still look right, but will incur the cost of a transparent object.

As your astute mind would have observed, using a lot of overlapping alpha leads to copious amounts of overdraw (redrawing the same pixel). The more screen space this takes up, the worse it gets. Thusly, we finally arrive at the moral of the story.

Alpha is evil! Shun it like the cancer that it is!



Common Artifact 1: Intersecting Alpha Geometry

If you have two pieces of alpha geometry that intersect or interlock each other, which one do you render first? Whichever one you pick, the one that is rendered first would not have the pixel information of the other, so blending will never be accurate. Stop tormenting your poor programmer… there is nothing he can do! This is further proof that alpha is evil.

Also, if you do tricks that offset the origin of a model, which messes up the sort order, you might get the same artifact we saw with the “Success!” text above. Your programmer can solve this problem by selectively sorting it, but usually at a cost. Tricks are evil too!

Common Artifact 2: Z-fighting

Occasionally, when two pieces of geometry are very close together, you may see them flicker or observe moving stripes on them. This is because you have hit the limits of the depth buffer. The depth buffer is not continuous. You can envision it as dividing the space between the near plane and the far plane of the camera into slices. If the pixels you want to render fall on the same slice, the poor GPU gets confused about which is the one true pixel. There are a number of solutions:

  • A 16-bit depth buffer would have 65,536 slices. A 24-bit buffer would have 16,777,216 slices. A 32-bit buffer would have 4,294,967,296 slices. So the obvious solution is to increase the size of the depth buffer. However, depending on the hardware, this is not always possible.
  • You could move the objects further apart, which is the most common sense thing to do. However, visually, it might look a little off.
  • Another trick is to adjust the camera so that the far plane is closer to the near plane. This will squish the slices closer together, making them thinner, and therefore decrease the chance of z-fighting.
  • You can move the offending objects closer to the camera.

That last one needs some explaining. If you have a purely orthographic camera (i.e. there is no perspective; far away objects don't appear smaller than close ones), all slices are equally spaced from the near plane to the far plane. However, for perspective cameras, these slices are not born equal. Instead, they are bunched up close to the near plane and get exponentially further apart as you approach the far plane. There are tricks your programmer can do in the shader to reduce this effect but, as noted in the previous section, tricks are evil!
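For the curious, a tiny C++ sketch shows why the slices bunch up near the camera. This is the common 0-to-1 mapping from distance to depth value; exact formulas vary by API and convention, so treat it as illustrative:

```cpp
#include <cassert>

// How a typical perspective projection maps a distance z (between the near
// plane n and the far plane f) to a normalized depth value in [0, 1].
// Note the division by z: depth resolution is NOT linear in distance.
double depth_value(double z, double n, double f) {
    return (f * (z - n)) / (z * (f - n));
}
```

With a near plane at 1 and a far plane at 1000, over half the depth buffer's range is spent on the first unit of distance, while the last unit before the far plane gets almost none of it. That is why moving the offending objects closer to the camera puts them on thinner slices.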

The Resource Management System

Posted: September 22nd, 2013 | Filed under: Hermit, Tutorials

Typically, resource systems are created as an afterthought. When the need comes to load or save some piece of data, a bunch of file io code is written and left to languish until it gets so convoluted that some unfortunate soul is tasked with cleaning it up. This is sad because, as one of the main backbones of the engine, any design misdemeanours here will have a tremendous ripple effect throughout the whole code base. After having put considerable thought into it, I present the design of the resource management system for the newly christened Hermit Engine. (Hermit, because it is designed by lone wolf coders for lone wolf coders ;))

Anatomy of a Resource System

Before we dive in, let’s take a look at what the resource system needs to achieve. Much more goes into it than simply loading and saving files. The resource system will be responsible for loading up all assets that are needed by other systems (shaders, textures, scripts, audio, geometry, etc.). This could come from a variety of sources, be it a simple folder structure, compressed archives, a database, a network stream or any other data source. It needs to ensure that all resources are loaded only once, and are reused when possible so as to keep the memory footprint as small as possible.  Ideally, it would be able to load resources at the beginning of levels so that no file io or allocations would happen during the main gameplay phase as these may cause stutters in frame rate. It needs to identify the different types of resources so that they can be funnelled off to the appropriate subsystems for usage. Also, when requested resources are not found, it should be able to substitute in default resources instead of simply crashing.

For Hermit Engine’s resource system, we will lean on 3 main pillars: The basic resource type, the resource pack, and the resource pack loader.

Resource Types

The Achilles heel for most resource systems are the humans that use them. Because of humans, we need human readable resource names. We need unwieldy folder structures so that the fleshy brained can find their stuff. As a result, we end up with elaborate string libraries, hash tables, directory traversal, and other icky constructs to deal with their archaic naming conventions. Remember the ripple effect we mentioned earlier? To all this, Hermit says, “Screw humans!”

Resource ID structure

The Hermit's resource comprises 3 pieces of data. The first is the resource identifier. Naturally, this won't be a string. Instead, it will be a 32 or 64 bit integer (depending on how many resources you have). The highest few bits will represent the resource type. The next few will indicate which resource pack it came from; this eliminates the need to store some sort of global identifier counter. The few after that will be a subtype that each system can define. For example, the shader system might want to differentiate between vertex, pixel and geometry shaders. After that, you simply have a running number. If this number happens to be zero, that is your default resource. Default resources will always be available, even in the absence of data, because they are procedurally generated (remember the whole point of this engine?).

This lets you easily enumerate all the resource types with a simple enum. Need default geometry? Just request RESOURCETYPE_GEOMETRY. Need to know what type a given resource is? Just & it with a high-bit mask. Want to sort resources by type for more efficient loading? Just use a simple int-compare on the resource id. Everything becomes simpler once the foundation is laid.
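A rough C++ sketch of the packing described above. The field widths here are my own illustrative choice, not a prescription:

```cpp
#include <cassert>
#include <cstdint>

// One possible 32-bit layout:
//   [ 4 bits type ][ 8 bits pack ][ 4 bits subtype ][ 16 bits running number ]
typedef uint32_t ResourceId;

ResourceId MakeId(uint32_t type, uint32_t pack, uint32_t subtype, uint32_t number) {
    return (type << 28) | (pack << 20) | (subtype << 16) | number;
}

uint32_t IdType(ResourceId id)    { return id >> 28; }
uint32_t IdPack(ResourceId id)    { return (id >> 20) & 0xFF; }
uint32_t IdSubtype(ResourceId id) { return (id >> 16) & 0xF; }
uint32_t IdNumber(ResourceId id)  { return id & 0xFFFF; }

// Running number zero is the always-available, procedurally generated default.
bool IsDefault(ResourceId id) { return IdNumber(id) == 0; }
```

Since the type occupies the highest bits, sorting by resource type really is just an int-compare on the id.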

The second piece of data that the resource needs is the source information required to generate the resource. For shaders, this would be the uncompiled script. For geometry, this would be spline and fractal fields. The last piece of the puzzle would naturally be the compiled data: a shader id in the case of shaders, or a VBO id in the case of geometry. The source data is what gets saved to disk. During the game's execution, the source data is read in when a resource is loaded, dished off to whatever system needs it in order to generate the compiled data, and then jettisoned, since it is no longer useful. If it were still useful for any reason, it would be part of the compiled data. In the case of an editor (you were planning on having one, right?), the source information is kept in memory so that it may be displayed, edited and used to regenerate the compiled data.

“But I’m a human, how will I ever find my stuff amongst all this binary gibberish?!”

The engine won't sully itself with this, since coddling humans is not its problem. However, the editor will maintain a two-way human_name<–>resourceid map so that when information is presented to the human, they get to see their precious strings. After all, it's the editor's job to act as an interface between human and machine.

Resource Packs

Resource packs are what they sound like. They are responsible for grouping related assets together and treating them as a batch as far as loading and unloading goes. They provide the structure which forces the human to think closely about what’s in memory at any one time. For a simple level-based game, you would typically have a pack for all the common elements, and a pack for each level. As you move from level to level, you unload old packs and load new ones. This guarantees that you won’t be doing any file io or memory allocations (with regard to resource handling anyway) that may cause stutters during the main gameplay. Furthermore, it accomplishes this without reference counting or any other clunky mechanism.

“But wait! I’m making a super huge MMO WoW-killer that needs to stream in assets on the fly. I can’t do things level-based!”

First off, if you are a lone-wolf programmer making a "super huge MMO", it would be my duty to inform you that you are bat-shit crazy and should turn back while you still can. That said, if you still have your mind set on it, all you need are smaller packs. Your packs would contain all resources related to a particular entity: its geometry, script, audio, etc. This handles the problem of making sure all assets are present before trying to display the object. Shared resources can be stored in zone-wide or global packs.

Regardless of how it's done, the smart designer will keep most if not all dependencies within a pack so that they get loaded/unloaded together. Once everything is viewed modularly in terms of packs, inter-dependency management should become pretty easy. Furthermore, since we are lone-wolf programmers, we only need a modicum of support for stupid design: assert(!IsDesignerBeingStupid()).

Pack Loaders

This is the part that does the file io. Its interface would only have three main methods: load a pack, unload a pack, and save a pack. You can have different loaders to cater to different needs. Here are a few examples:
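The interface might look something like this. The names and signatures are illustrative, not the actual Hermit Engine code, and the in-memory loader exists purely to show the shape of an implementation:

```cpp
#include <cassert>
#include <cstdint>
#include <set>

// The three-method interface described above.
class PackLoader {
public:
    virtual ~PackLoader() {}
    virtual bool LoadPack(uint32_t pack_id)   = 0;
    virtual bool UnloadPack(uint32_t pack_id) = 0;
    virtual bool SavePack(uint32_t pack_id)   = 0;
};

// A trivial in-memory loader that just tracks which packs are loaded.
class MemoryPackLoader : public PackLoader {
    std::set<uint32_t> loaded;
public:
    bool LoadPack(uint32_t pack_id) override   { return loaded.insert(pack_id).second; }
    bool UnloadPack(uint32_t pack_id) override { return loaded.erase(pack_id) == 1; }
    bool SavePack(uint32_t) override           { return true; }
};
```

Swapping loaders (directory, database, archive, network) changes where the bytes come from without touching the rest of the resource system.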

A lot of people seem to like keeping all their assets in version control for some reason or other. You can have the loader write out to a directory structure which can then be checked in. If you really want to, you can also make use of the editor’s mapping to write them out into human filenames rather than just plain ids, complete with convoluted directory structure.

One of the better places to store data for editing would be in a database. This could be as simplistic as SQLite or as elaborate as Oracle. This is handy for synchronization of assets between multiple machines, DLC management, statistics gathering, and those shady mass-edit operations that no self-respecting developer would ever admit to doing. The danger here is that a corrupt database could ruin your day.

For runtime, most resource packs would end up in compressed archives. The pack nature makes this extremely easy: instead of having to compress each resource individually, you simply compress the whole pack. The header would simply comprise resourceid<–>offset_location pairs. When loading, you load the whole pack, decompress the whole thing, and distribute offset pointers to each of the resource consumers to generate their compiled data. After that, just free all the source data at once.
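The header lookup can be sketched like so (the layout and names are illustrative, not a file format specification):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// One header entry per resource: where its source data starts within the
// decompressed pack blob.
struct PackEntry {
    uint32_t resource_id;
    uint32_t offset;
};

// Finds the offset of a resource within the pack, or -1 if it isn't there.
int64_t FindOffset(const std::vector<PackEntry>& header, uint32_t resource_id) {
    for (const PackEntry& entry : header)
        if (entry.resource_id == resource_id)
            return entry.offset;
    return -1;
}
```

If the header is sorted by resource id (an int-compare, as noted earlier), the linear scan could become a binary search for free.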

If you really want to, you could even have a loader that reads off a network stream for dynamic runtime sharing. However, this being the HERMIT engine, we network and share with nobody.

Here we go again…

Posted: July 14th, 2013 | Filed under: Hermit, Projects, Tutorials

Once upon a time, there was a project called Darwena. This was to be a game engine built from the ground up with the aim of reducing our reliance on that other group of people that are kinda crucial to game development. (the artists!) That project started up fairly quickly, with cross-platform input and rendering implemented, even a little text. However, it died just as quickly when I joined the ranks of Gameloft. Many factors contributed to that. There was the fear that as a generic employee in a generic evil empire, my work would simply get assimilated into the host. There was the lack of time and energy due to the crazy scheduling of a freshly started-up studio (albeit under the umbrella of said evil empire). Most of all, I started learning a whole lot and discovered (i.e. had my nose pushed into, gently) a lot of concepts that made my previous work look like the inane doodles of a three-year-old. (no offense to three-year-olds, you rock!)

Well, most of that has changed now. The wild schedule has been tamed. The powers that be have given their blessing to my pursuit of non-commercial project work. Most of all, I’ve learned enough that I just need to write something to congeal all those disparate concepts into a contiguous body of knowledge. Also, I now have a conducive work area in my apartment, all ready to get those creative juices flowing!

Design Goals

  • It will be cross-platform across Windows, OSX and Linux, though most work will be done in Linux. (because we can!)
  • It will be multi-threaded, and data-oriented, and component based. (I was told that component models tend to not work well. Time to test that!)
  • It will emphasize procedural generation. However, we’re not targeting the sort of procedural generation that we find in the demoscene which creates a lot of pretty stuff with very small code and assets. Instead, we will be generating assets non-realtime (unless things go exceedingly well), mutating and splicing them with genetic algorithms using human input as the fitness function, in order to produce not-so-abstract-looking visual and audio assets. This is not a new concept, but it sounds like fun and worth exploring.

Target Features

  • There will be an editor for creating content, possibly including game levels.
  • Text input and editing will be highly emphasized as the main means of control. The target users are programmers, so there’s no fancy drag-and-drop-connect-the-dots nonsense  going on. Command prompt ftw!
  • There will be a scripting language for fast prototyping, most likely AngelScript because the C++-like syntax might be easier to copy/paste into actual finalized code. Also, the exposed debugging interface would allow us to do breakpoints, watches, etc. within the scripts through our custom interface.
  • The first asset generation milestone would be creating X different assets with different mutations for the human to select and filter through.
  • The second asset generation milestone would allow splicing of two or more assets into one.
  • The third asset generation milestone would be finer-grained gene manipulation.

Basic Architecture

Despite all the fancy stuff, a game engine is still a game engine and will need the same basic things. Our design will emphasize multithreading while trying to steer away from excessive mutex use or double-buffering of data. Instead, we apply the data-oriented paradigm and look to transform data from one form to another with each module. On a very high level, it may look something like this:

High Level Model

The Simulation module is basically a huge collection of cascaded step functions. This can be thought of as a physics process. For example, you use acceleration to step velocity. You then use velocity to step position. However, this could apply to HSV color, audio frequency, emotional state, etc. It is very likely to eventually comprise many submodules to handle different types of algorithmic relationships. I've still to sort this out in my head.
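A cascaded step function in its simplest form might look like this (the names are mine, not the engine's):

```cpp
#include <cassert>

// The simplest possible cascade: acceleration steps velocity, and the
// freshly stepped velocity steps position.
struct Body {
    double position, velocity, acceleration;
};

void Step(Body& b, double dt) {
    b.velocity += b.acceleration * dt;  // acceleration steps velocity
    b.position += b.velocity * dt;      // velocity steps position
}
```

The same shape works for anything steppable: swap position and velocity for hue and hue-rate, and you are stepping HSV color instead of physics.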

The Steering module takes the output from the simulation in order to generate control inputs for the next simulation step. It can also take additional input from its internal stored state or from external sources like human or network interfaces.

The Rendering module simply takes the output from the simulation and splats it on screen, speakers or whatever other output device so that we humans can perceive what’s going on in there. In a sense, this is just a slightly twisted MVC model.

The whole pipeline will run as if it were in a single thread but each of the modules that execute will be threaded. This will allow us to scale dynamically as more processors become available, and will also let us collapse back down into a single-threaded system for easy debugging. Each of the modules or submodules will load balance by simply dividing the number of tasks by the number of processors. We can do this as long as all the submodules are homogeneous in the sense that each task will take about the same amount of time to execute. The only exception to this is rendering which generally likes to run from the same thread, and is thus run in parallel with steering since they operate off the same immutable data. This may cause bubbles in the pipeline, but that’s another worry for another day.
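The even split described above can be sketched with standard threads. This version uses a strided split rather than contiguous chunks, but the balancing idea is the same, and all names are invented:

```cpp
#include <cassert>
#include <functional>
#include <thread>
#include <vector>

// Splits `count` homogeneous tasks across `workers` threads by striding.
// This is fair load balancing only because every task is assumed to take
// roughly the same time, as the text requires. With workers == 1 it
// degenerates into a plain single-threaded loop, which is handy for debugging.
void RunBalanced(int count, int workers, const std::function<void(int)>& task) {
    std::vector<std::thread> threads;
    for (int w = 0; w < workers; ++w)
        threads.emplace_back([=, &task] {
            for (int i = w; i < count; i += workers)
                task(i);
        });
    for (std::thread& t : threads)
        t.join();
}
```

Each task here writes to its own slot, so no mutexes or double-buffering are needed, which is exactly the property the pipeline design is after.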

Where are we now?

Currently, we have rendering up on all three platforms, as well as a modicum of input processing. Project/makefiles are generated through CMake. This includes the main engine library project, as well as a project for the editor that runs on the engine.

The next stop is the resource system. Stay tuned!

C vs C++ for Game Engine Code

Posted: May 12th, 2013 | Author: | Filed under: Soapbox, Tutorials | 2 Comments »

Until a year or so ago, I was a C++ purist. This was fair, because as a game programmer, generally you would want your code in C++ 99% of the time. The object-oriented style it promotes helps a lot in scalability and code management. Any super-time-critical code would mostly be handled by the engine anyway. Also, as a superset of C, there’s technically no reason why it can’t do everything that C can do. However, coming across game engine code written in C, most notably in HGE, Orx and RK, was an eye opener. There is a poetic simplicity and succinctness to the code. Just reading the code alone makes you instinctively feel that it should be faster, even if just by a smidgen. Much of this past year was spent ping-ponging between the two concepts, trying to figure out which one I like better.

Why C++ instead of C


Namespaces are wonderful to have. Ultimately all you want to do is prevent naming collisions. C gets around this by generally having absurdly long function names like BlahBlahEngine_GetIrritatingObject(), which ultimately makes the code harder to read. In C++, GetIrritatingObject() is simple and to the point.
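For illustration, using the post's own (hypothetical) names, the two styles side by side:

```cpp
// C style: the "namespace" is baked into every function name.
int BlahBlahEngine_GetIrritatingObject() { return 42; }

// C++ style: the namespace scopes the name, so call sites stay short.
namespace BlahBlahEngine {
    int GetIrritatingObject() { return 42; }
}
```

Inside the namespace, or after a using-declaration, the call is simply GetIrritatingObject().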

Operator overrides are also very nice when used sensibly. Case in point: which is easier to read?

//C Code
Vector a, b, c, d;
Vector_Set(&a, 1, 2, 3);
Vector_Set(&b, 3, 4, 5);
c = Vector_Add(a, b);
d = Vector_Mul(c, 3);

// C++ Code
Vector a(1, 2, 3);
Vector b(3, 4, 5);
Vector d = (a + b) * 3;

This readability can also be a detriment if badly used, because the programmer is not made aware of the cost of each operation. This, however, can be circumvented by simply having more alert programmers! 😀
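A minimal sketch of what such a Vector class might look like (a hypothetical bare-bones version, not any engine's actual one). Note that each overloaded operator quietly constructs and returns a temporary, which is exactly the hidden cost the alert programmer needs to keep in mind:

```cpp
// Minimal vector type with overloaded operators. Each operator returns a
// new temporary by value -- convenient to read, but not free.
struct Vector {
    float x, y, z;
    Vector(float x_ = 0, float y_ = 0, float z_ = 0) : x(x_), y(y_), z(z_) {}
};

Vector operator+(const Vector& a, const Vector& b)
{
    return Vector(a.x + b.x, a.y + b.y, a.z + b.z);  // one temporary
}

Vector operator*(const Vector& a, float s)
{
    return Vector(a.x * s, a.y * s, a.z * s);        // another temporary
}
```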


Resource acquisition is initialization. This is one of the little tricks used by smart pointer classes to ensure that memory is cleaned up after them. In general, I don’t like smart pointers. Sure, they are a convenient way to manage memory, but abuse of them tends to lead to bad architecture (more on this later). However, I do like to write almost all my classes using the RAII paradigm. While it has the side-benefit of making your code exception-safe, it also encourages what I consider to be good architecture—in that everything is allocated at once (and therefore most likely in the same area of memory), and no strange allocation happens through different code paths.

C generally uses Init and Deinit functions. These are not exception-safe (I use the term liberally), so if your application gets shut down unexpectedly half-way, it will most likely leak memory. Some OSes might sandbox you, but not all do.
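A small sketch of the RAII style described above (the class and its names are hypothetical): the buffer's lifetime is tied to its scope, so an early exit, including an exception, cannot leak it, whereas the Init/Deinit style depends on Deinit actually being reached.

```cpp
#include <cstddef>
#include <cstdlib>

// RAII: acquire in the constructor, release in the destructor. Leaving the
// scope -- normally or via an exception -- always frees the buffer.
class ScratchBuffer {
public:
    explicit ScratchBuffer(std::size_t bytes)
        : data_(static_cast<unsigned char*>(std::malloc(bytes))), size_(bytes) {}
    ~ScratchBuffer() { std::free(data_); }

    std::size_t size() const { return size_; }
    unsigned char* data() { return data_; }

private:
    ScratchBuffer(const ScratchBuffer&);             // one owner, no copies
    ScratchBuffer& operator=(const ScratchBuffer&);

    unsigned char* data_;
    std::size_t size_;
};
```

Everything is allocated at one point (construction) and freed at one point (destruction), with no strange allocation through different code paths.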


Templates: love or hate them, they are still useful for all sorts of things, most notably container classes. Their shortcomings are well documented—hard to debug, inconsistency between compiler implementations, etc. I equate them to C macros, but with more use cases and potential problems. Overall, they are a net gain, but I would still shy away from STL/Boost unless I know their exact implementation. I tend to use these more for my own custom classes, but the moment ugly template magic starts happening, I take pretty much any other alternative I can.

Why C instead of C++

Object vs Data Orientation

We all know object oriented code. After all, it is what was promoted when we were in school, even though at the time, we all secretly wanted to write everything in one big function and be done with it. The structure it enforces produces very scalable and flexible code. It also had the benefit of localizing all data for an object in the same spot of memory, which was and still generally is a good thing.

Data oriented code is the new (relatively speaking) paradigm of envisioning code modules as data manipulation. You take in an input and produce an output. By cleverly organizing your data in a somewhat unobject-like way, you can have the same operation acting on mass amounts of data that happen to be all located sequentially next to each other. As a wise man once summarized, you organize your data as a struct of arrays, rather than an array of structs. The benefit of this is that your data is more cache-optimized, and also lends itself more easily to parallel processing.
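The struct-of-arrays idea can be sketched in code (a hypothetical particle example, not from any particular engine). In the SoA layout, all the x components sit contiguously in memory, so a loop that only needs x streams through cache lines instead of striding over whole particles:

```cpp
#include <cstddef>
#include <vector>

// Array of structs: each particle's fields are adjacent, but a loop over
// just one field strides across the whole struct.
struct ParticleAoS { float x, y, z, mass; };

// Struct of arrays: each field is its own contiguous array.
struct ParticlesSoA {
    std::vector<float> x, y, z, mass;
};

// Touches only the x array -- in the SoA layout this is a pure
// sequential walk through memory.
float SumX(const ParticlesSoA& p)
{
    float total = 0.0f;
    for (std::size_t i = 0; i < p.x.size(); ++i) total += p.x[i];
    return total;
}
```

The same loop over a `std::vector<ParticleAoS>` would drag y, z and mass through the cache for no benefit, which is the cache-optimization point made above.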

Now, it should be noted that both C and C++ are capable of implementing both paradigms, even at the same time (they are not mutually exclusive). However, by its nature, C++ nudges you to think in a more object-oriented way. C doesn’t nudge you at all, and has absolutely no qualms about letting you chop off your own foot if you so desire. In C, however, it is somewhat easier to head down the data-oriented route without the uneasy feeling that you are doing something naughty. I found the best way to not feel guilty is simply to stop feeling guilty.

The Singleton Problem

This is one of my biggest peeves with object-oriented languages. Let’s say you want to write an audio module, with functions like PlaySound(soundid) that everybody and their grandmother wants to call at some point. OO logic dictates that there should be some kind of SoundManager class, and since there should be only one of them, the Singleton pattern is followed. Thus every time you want to play a sound, you would do something like SoundManager::GetSingletonPtr()->PlaySound(soundid). Depending on your implementation, GetSingletonPtr could contain a branch to check if the singleton exists. Also, managing when your singleton object gets allocated, initiated, etc. is a pain. Of course, there are at least two dozen singleton variants that get around this in various ways. But that’s a lot of reading…

C, on the other hand, is made for modules. By embracing the “evilness” of a globally defined function, one can simply call SoundManager_PlaySound(soundid). Firstly, it’s as simple as just calling a function.  In fact, it is just calling a function! Secondly, your module would have already been inited and deinited at startup and shutdown giving you unequivocal control over the lifespan of the SoundManager.

I get around this by embracing the “evilness” of global objects. All these objects’ lifetimes are managed by a single RAII manager class, but pointers to them are available in the global scope. SOUNDMANAGER->PlaySound(soundid) is not bad, though still not as “clean” as the C way.
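A sketch of that arrangement (all names hypothetical): the globals are plain pointers, and a single RAII manager owns every module, fixing construction and destruction order in one place.

```cpp
// The module itself: just an object with the functions everybody wants.
class SoundManager {
public:
    bool PlaySound(int soundid) { return soundid >= 0; }  // stub implementation
};

// Global pointer: call sites just write SOUNDMANAGER->PlaySound(id),
// with no branch hidden inside a GetSingletonPtr().
SoundManager* SOUNDMANAGER = 0;

// Single RAII owner: constructing one Engine brings every module up,
// and destroying it takes them all down, in a well-defined order.
class Engine {
public:
    Engine()  { SOUNDMANAGER = &sound_; }
    ~Engine() { SOUNDMANAGER = 0; }
private:
    SoundManager sound_;
};
```

The Engine object is created once at startup, so module lifetimes are unequivocal, just as in the C approach.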

Convolution in the Name of Safety

C and C++ were generally regarded as one step up from assembly. When you want finer-grain control over what happens where, without explicitly dictating each opcode, they would be your languages of choice. Lately, C++ has been moving away from that shaky ground. The additional functionality, like smart pointers, dynamic casts, and templatized algorithms and libraries, is all there in essence to prevent you from screwing yourself over. The cost is that you are further removed from implementation details. When I malloc() something, I get a chunk of memory. When I new() something, I get an object, with its constructor called, and whatever happens in there. This is not a bad thing, and probably improves the quality of life for C++ programmers in general. The thing is, if you are going in this direction, why not use a language like Java or C#, which was designed for this role in the first place?

Instead of writing code to prevent programmers from doing stupid things, why not have smart programmers that won’t do stupid things in the first place? Game programmers, especially engine programmers, are control freaks when it comes to what their code does. Sometimes, we do want to shoot our own foot off (because it’s amusing?), so let us and stop asking questions!

I get around this by simply not using the C++ features I don’t like. Yes, I use new and delete, since I know what they do (as long as they are my own classes). I use sprintf instead of stringstreams, which are just clunky. I use arrays instead of vectors. I use the C string manipulation functions, or simply roll my own, instead of std::string. I use C casts because they are briefer, and architect so I won’t be in a position to cast the wrong pointer. That last one is particularly poignant. With all the convenience tools taken out, you are forced to look at your architecture, design it sensibly, and generally be more aware of the different interactions. (The same can be said for avoiding VisualAssist, but that’s a rant for another day!)


Shaders 101

Posted: July 23rd, 2011 | Author: | Filed under: Tutorials | No Comments »

As indie or casual game developers, we seldom get the chance to play with shaders. The reason is that there simply is no need to get ourselves involved in all that “stuff”. The engines that we work with come with most of the stock shaders that we need, so all that is required is a basic understanding of what they are and what they do. Kind of like suntan lotion: you don’t question too deeply how it works. You just slap it on.

Recently however, I have had the occasion to dabble in shaders and found that it is really not all that complicated. So here’s a little overview to get you started.

The Triumvirate

Some time ago, shaders were written purely in assembly, much like how normal programs were written a longer time ago. The next natural step in evolution was of course to spawn a set of high level languages to deal with the complexity. HLSL was coined by the folks at Microsoft. It works in tandem with DirectX, meaning it’ll run cross-platform as long as by cross-platform, you mean Windows and Xbox. GLSL was spawned by the hippie open-source community and works as a direct extension of OpenGL, so it will run everywhere else. The folks at NVidia came up with CG which doesn’t really run anywhere but instead compiles into both HLSL and GLSL, presumably saving time writing shaders in both languages.

Structurally, they are all the same. It is only a matter of syntax and code rearrangement, if you ignore some of the more esoteric features of each language. All of them give you the power to manipulate matrices and vectors as easily as you would mere integers in traditional programming. They are also designed to be (potentially at least) massively parallel so more things can be done faster, which is generally a good thing. The price, however, is limitations in what you can actually do within your shader.

The Pipeline

The programmable pipeline, as they like to call it, differentiates itself from the more traditional fixed pipeline in the sense that it is, well, programmable. This gives the programmer (or artist, if the wealth of shader generation tools keeps growing) more power over the rendering process.

The old way of doing things was that you’d fill up vertex buffers with vertices, index buffers with indices, strategically position a few lights and define a few global parameters. After you have everything configured sweetly, hey presto, there’s your pretty picture! The main errors students commit using this traditional workflow are trivial things like… do you have a light in the scene? Is your camera pointed the right way? Is the object so small that it is occupying less than one pixel on screen? Ahh… life was simple back then!

Nowadays, you start off with a set of vertices. These could come directly from meshes or they could be generated from geometry shaders (I don’t know much about geometry shaders so I’m not going to say much about them). It doesn’t matter. Each and every vertex gets run through your vertex shader program. This happens four, eight or however many channels your graphics card has, at a time. As a result, you can’t get information on the other vertices (unless you are especially clever and sneaky about it), but you can pretty much do whatever you want with the vertex you have. Change its position, its color, its normal or any other of its attributes. Totally up to you. Of course, with great power comes great responsibility, and with great responsibility comes greater potential to f*** things up.

Once all the vertex processing is done, your vertices get rasterized. Usually, by that, we mean that they get transformed to 2D screen coordinates, but that isn’t exactly true. It is the job of your vertex shader to transform the vertices into 2D coordinates (it’s just a matrix multiplication, not hard). What the rasterizer does is it goes through the pixels between your vertices and interpolates the values. So if you have one vertex that is red and another that is white, all the pixels in between will be varying shades of pink. This doesn’t just happen for your colors, but also for normals and whatever other attributes you may have chosen to endow your vertices with.
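On the CPU, the per-attribute interpolation the rasterizer performs is just a lerp. A sketch of the red-to-white example above (the Color type is hypothetical, for illustration only):

```cpp
// Linear interpolation of one vertex attribute: t = 0 gives vertex a's
// value, t = 1 gives vertex b's, and everything in between is a blend.
struct Color { float r, g, b; };

Color Lerp(const Color& a, const Color& b, float t)
{
    Color out;
    out.r = a.r + (b.r - a.r) * t;
    out.g = a.g + (b.g - a.g) * t;
    out.b = a.b + (b.b - a.b) * t;
    return out;
}
```

Halfway between a red vertex and a white vertex, every pixel gets the pink you would expect; the hardware does the same thing for normals, texture coordinates and any other attribute.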

Each of these pixels is then fed into your pixel (if you are from Microsoft) or fragment (if you are a hippie) shader for further processing. There, you can do a whole bunch of other complex or simple operations, simply to figure out what color you want to paint that particular pixel. As per your vertex shader, you only have access to that one pixel throughout your shader, and the pixels themselves get pushed through n channels at a time, depending on how much your vendor crippled your graphics card via firmware.

Shader Anatomy

Finally we get to the code part of things. I’m going to compare CG and GLSL, mainly because CG and HLSL code look pretty similar, and CG has fewer letters.

Data types

There are four general data types to be found in a shader language.

  • There’s the basic floating point number, which all three languages have aptly named “float”.
  • There’s the vector of varying sizes, called float2, float3 and float4 in CG, or vec2, vec3 and vec4 in GLSL.
  • There’s the matrix of varying sizes, called float2x2, float3x3 and float4x4 in CG, and mat2, mat3 and mat4 in GLSL.
  • There are textures of varying dimensions, called sampler2D, sampler3D and samplerCUBE in CG, and sampler2D, sampler3D and samplerCube in GLSL (don’t let cube maps blow your mind!). These are essentially the equivalent of lookup tables.

Of course, there are a lot more actual data types than these, most having to do with fixed-point math or irregularly shaped matrices. You also have bools and stuff, but you can figure those out on your own.

Types of Variables

As you may or may not have gathered from the section on pipelines, much of the idea behind shaders is about data flow and how it trickles down into the pixels. As a result, we have four types of data.

Uniform variables are stuff that you just sorta shove into the shader. How they get in is up to you. Some have them as part of the exporter from Max or Maya. Some have them built into the level editor. As long as you feed your shader before you run it, all will be well. Here’s your world matrix, a handful of numbers that should mean something and a few vectors for good measure. glhf! CG embraces the evil and defines these as simply global variables within the shader code. GLSL is more circumspect and demands that you use the keyword “uniform” to denote them. GLSL also has a whole bunch of predefined uniform variables just to keep everybody on the same page.

//CG: uniforms are plain globals
float time;
float amplitude;
float frequency;
float ambient;
float4 lightdir;
sampler2D tex0;

//GLSL: uniforms are marked with the keyword
uniform float time;
uniform float amplitude;
uniform float frequency;
uniform float ambient;
uniform vec4 lightdir;
uniform sampler2D tex0;

Uniform variables aren’t your only form of inputs. There are also vertex attributes. These are attributes that are defined as part of your vertex data structure. They can include mundane things like color, normals and uv coordinates, as well as customized data that only has significance to your skillfully crafted shader. How do you generate these vertex attributes? They can either be pre-calculated, or more often than not, are “painted” in by the 3D artist.

//CG
struct a2v
{
    float4 position : POSITION;
    float4 color    : COLOR0;
    float4 normal   : NORMAL;
    float2 texcoord : TEXCOORD0;
};

//GLSL
//attribute vec4 gl_Vertex already defined
//attribute vec4 gl_Color already defined
//attribute vec3 gl_Normal already defined
//attribute vec4 gl_MultiTexCoord0 already defined

In CG, all you do is define a struct and dump whatever vertex information you want into it. This will later be fed into your vertex shader. You can call your attributes whatever you like, but may need to tag them for deciphering purposes. For GLSL, you use the keyword “attribute”. Like uniform variables, GLSL has a lot of predefined vertex attributes to cover the usual suspects.

Varying variables are what your vertex shader spits out, and is then fed into your pixel/fragment shader. Remember that by the time they reach the pixel shader, you will more often than not be working on interpolated data. In CG, you define another struct to hold this intermediate information. In GLSL, you use the keyword “varying”. As always, there are a bunch of predefined varying variables.

//CG
struct v2p
{
    float4 position : POSITION;
    float4 color    : COLOR0;
    float4 normal   : NORMAL;
    float2 texcoord : TEXCOORD0;
};

//GLSL
//vec4 gl_Position already defined (the required output)
//varying vec4 gl_FrontColor already defined
varying vec4 normal;
//varying vec4 gl_TexCoord[] already defined


Finally, you have the output from your pixel shader. In CG this is, you guessed it, another struct. In GLSL, there’s no keyword. You just use the predefined variables, namely gl_FragColor.

//CG
struct p2f
{
    float4 color : COLOR0;
};

//GLSL
// gl_FragColor already defined

The Shader Functions

After you have defined the data that you are going to be playing with, all that is left is to write the shaders themselves. Each shader is a self-contained function. In GLSL, your vertex and fragment shaders are usually in different files, and the entry point for each one is called “main()”. For CG, however, you can have multiple shaders in the same file and call them whatever you want. Whatever the case, during the actual render process, one program, comprising exactly one vertex shader and one pixel shader, will be run per mesh per pass.

Here is a vertex shader that bounces the mesh around a bit.

//CG
v2p mainV(a2v IN)
{
    v2p OUT;
    OUT.normal   = IN.normal;
    OUT.color    = IN.color;
    OUT.position = IN.position + amplitude * sin(time * frequency);
    OUT.texcoord = IN.texcoord;
    return OUT;
}

//GLSL
void main()
{
    normal = vec4(gl_Normal, 0.0);
    gl_FrontColor = gl_Color;
    vec4 newpos = gl_Vertex + amplitude * sin(time * frequency);
    gl_Position = gl_ModelViewProjectionMatrix * newpos;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}

And here’s a pixel shader that computes lighting and figures out what color to render based on the supplied texture, vertex color, directional light and ambient lighting. Note that it would have been more efficient to calculate the texture color as well as lighting in the vertex shader rather than here, so that it is calculated per vertex rather than per pixel. Food for thought… think of all the neat hacks you can do! Also think of all the side-effects!

//CG
p2f mainF(v2p IN)
{
    p2f OUT;
    float NdotL = dot(IN.normal, lightdir);
    OUT.color = IN.color * tex2D(tex0, IN.texcoord) * NdotL + ambient;
    return OUT;
}

//GLSL
void main()
{
    float NdotL = dot(normal, lightdir);
    gl_FragColor = gl_Color * texture2D(tex0, gl_TexCoord[0].st) * NdotL + ambient;
}


I wandered lonely as a cloud

Posted: February 19th, 2011 | Author: | Filed under: Tutorials | No Comments »

I wandered lonely as a cloud
That floats on high o’er vales and hills,
When all at once I saw a crowd,
A host, of golden daffodils;
Beside the lake, beneath the trees,
Fluttering and dancing in the breeze.

Continuous as the stars that shine
And twinkle on the milky way,
They stretched in never-ending line
Along the margin of a bay:
Ten thousand saw I at a glance,
Tossing their heads in sprightly dance.

The waves beside them danced, but they
Out-did the sparkling leaves in glee;
A poet could not be but gay,
In such a jocund company!
I gazed—and gazed—but little thought
What wealth the show to me had brought:

For oft, when on my couch I lie
In vacant or in pensive mood,
They flash upon that inward eye
Which is the bliss of solitude;
And then my heart with pleasure fills,
And dances with the daffodils.

– William Wordsworth

Put it on the cloud!  The not-so-latest trend of web-hosting has finally caught up to me.  Today, however, we’re going to take a quick overview of Amazon’s Web Services (AWS) so that we can dream of the possibilities it may bring to our own development projects.  AWS is, in fact, quite well used by some of the larger casual game developers like Zynga to manage their large user base.

Amazon provides its services not as one package, but rather as several. This can be daunting to a first-time user. However, upon closer inspection, we don’t really need to know all the nitty gritty details. Once you activate one service, all the other dependencies are automatically bundled in, saving you from the headache.

Computing Power

This is the primary service we are concerned with: Amazon Elastic Compute Cloud (EC2). Amazon has a large number of servers that can provide a tremendous amount of computing power. What EC2 does is create a virtual machine instance on this server farm, on which you can basically run whatever it is you want. You get billed on the amount of CPU time you consume. So the more intensive your tasks, the more you pay.

EC2 instances can easily be set up via a web-based control panel called the AWS Management Console. There are a variety of different types of instances that you can create, depending on your expected computing load, ranging from micro to large. You can create as many instances as you want to, as long as you have the $$$ to back it up. Amazon provides a set of images that you can have pre-installed on the newly created instance. It boils down to a choice between Amazon’s version of Linux, Suse, and various flavors of Windows. I went with Amazon Linux, since that was free and I’m cheap.

When I created the instance, it was up within about 30 seconds, with a bare bones OS install. It handily creates an ssh key-pair for you, so you can readily ssh into your new instance via Amazon’s public DNS server. Adding packages was easy: you simply use yum. Unlike setting up your own Linux environment, installing packages is pretty fast since they are all hosted on the same server farm. I just installed gcc and did the obligatory “Hello World”. Yup, it works!

You can assign up to 5 elastic IPs to your instances. By elastic IPs, we mean public IPv4 addresses. These come free of charge, and will stay constant as long as your instance is up and running. However, if you have an elastic IP attached to a stopped instance for more than an hour, they start charging you for it. Kind of like how those buffet steamboat places used to charge you for whatever leftover food you have. It discourages waste. Amazon does not provide domain name services, so you will need an external provider if you want a fancy non-numeric URL.

Once you have your server configured, you can, in theory, save the configuration as an image or AMI. This allows you to then spawn multiple instances with the same configuration if load gets a bit high.

Data Storage

Your EC2 instances come with some space they call ephemeral storage. What that rather long word means is that once the instance is shut down, all the data on it is gone. That storage is meant for run-time operation rather than long-term storage.

What you can do is use something called Elastic Block Store (yes, they really like the word “Elastic”) or EBS. This is simply storage space. Each EBS can be attached to your instance and mounted as a volume, providing you with persistent storage. What’s more, you can image your data and create multiple snapshots for use as backup, or for other instances to serve up data. Via Amazon’s Simple Storage Service (S3), this data can be transmitted between Amazon’s four data centers (2 in the US, 1 in Europe, 1 in Singapore).

Load Balancing

Amazon provides load balancing via… you guessed it – Elastic Load Balancing. Unfortunately for us game developers, it does not automagically load-balance your homebrew MMO. It is instead catered more to web servers. So if you are writing a PHP-based MMO, more power to you! If not, you will have to architect your load-balancing mechanism yourself. What this means is that your server code will need to be written atomically enough that multiple instances of it can run concurrently on different virtual machines. In addition, you will need to write load balancing gateways that dole out incoming traffic to the free servers.

To aid you a bit, Amazon Cloudwatch provides you with data such as CPU and Memory consumption so that you can gauge when you need to spawn additional instances. 3rd party vendors like Rightscale provide additional paid services to enhance scalability by allowing you to rapidly spawn a large number of instances from pre-written scripts, even allowing dynamic resizing based on server load.


Amazon Web Services are indeed fairly easy to use. The scalability means that you can transit from a development environment to a deployed one with relative ease. Signing up was fairly hassle free. Just provide your credit card and telephone number and they will verify you. Given that the first 750 hours of compute time on a micro instance are given to you free of charge, there’s no reason not to sign up and play around with it yourself. However, once again as we oft do learn, it is no silver bullet. Making your game truly scalable still heavily rests on the shoulders of the developer. All Amazon does is provide the infrastructure.

Nerding Out The Windows Console

Posted: February 3rd, 2011 | Author: | Filed under: Tutorials | No Comments »
PuttyCyg to the Rescue!

So you bought a Mac, and oohed and ahhed at the graphics and the user interface (or maybe you hated it, I don’t really care).  Then the inner nerd called to you and you started messing around with the terminal, loving the linux-like console and enjoying the power that a real OS command line gives you. Maybe you even went so far as to customize the look of your console, perhaps installing Visor in the process for a beautiful full-screen terminal. Ahh, how we love our text.

Then you turn back to your Windows 7 machine and oh, how loathsome it is to work with now! The MS-DOS Prompt just does not measure up. Even their new-fangled PowerShell does not satisfy. We could perhaps forgive this if we could at least get it to run in full screen. But woe is you as you realize they took out that full-screen mode feature when they moved from Windows XP to Windows Vista. So what’s a poor nerd to do?

Well, if you are a true nerd, you’ve probably heard of Cygwin. Back in the day, it was a way to get a full linux-like environment within Windows itself, complete with X11 and all those fancy goodies. Well apparently, they had a change in philosophy when I wasn’t looking and it now comes with a minimal install. You have to choose the packages that go into your build, making it leaner and meaner from the get-go. So yes, go forth and download that. Additional packages to install would be inetutils (which gives you ftp and stuff like that), as well as ssh (why Windows doesn’t have this natively is beyond me). If it so tickles you, you can get good stuff like gcc and automake as well. And of course, we can never forget good old Vim. If you’re an Emacs fan, that’s available too. And if you are not quite so nerdy, there’s always stuff like Nano.

So you installed Cygwin and got it running and…UGHGHHH!!! You’re still stuck with that same lousy non-fullscreenable command prompt, albeit with Unixy power. Well, not to worry. Go and get a special version of Putty (this is the program that windows people use to overcome their ssh deficiency) called PuttyCyg.  This will allow you to start up a session in… you guessed it, your own Cygwin instance! Also, unlike the command prompt, you can maximize, or even full-screen it! Amazing!

An interesting side-effect is that since we installed the ssh package, we can ssh directly from our command window, thus voiding the original functionality of Putty. An irony that all true nerds can appreciate. To streamline the whole process, you can alter the windows shortcut to Putty, and feed in a command-line parameter to get it to start up your Cygwin session on the click of a button!

All this seems like overkill if you consider my original intention. All I wanted was a full-screen Vim. Sure, there’s gVim, but it sucks in the full-screen department: depending on your font size, it’s not quite full screen. It also comes with fancy schmancy mouse and gui stuff that makes it quite… unVimlike. So yes, I brought out the sledgehammer to squish the itty-bitty ant, and yay! It’s dead! Mission accomplished.

Programming, How To

Posted: January 11th, 2011 | Author: | Filed under: Tutorials | 5 Comments »
Problems in Programming

Good programmers, especially game programmers, have this mystique about them. They seem to be able to juggle ginormous amounts of machine gibberish and cobble it into an application or game that does all sorts of wonderful things, almost as if it has a life of its own. People think that they must be really really smart, and for the most part, programmers let them think that. Hey, if you can’t be handsome or muscular, at least you can be clever, right?

What they told us

Most courses introduce programming as a set of instructions that the machine follows. “Look, the computer looks for this keyword called ‘main’, then it does whatever you put in between the brackets.” And thus students are introduced to the wonder that is… “Hello World!” Hey look! You can make a couple of words appear on screen, therefore you are making progress! Wonderful! After that, they are introduced to constructs like branches and loops, maybe even functions and classes. The programs the student writes become more complex until they get stuck.

“Yeah, I get how these programming constructs work, but how do I put them together to make the program do what I want?” At this stage, they get taught UML. Yeah, this is what we call software engineering, where we plan out how the program will work by drawing pretty pictures! This gives rise to two new problems. First, how the hell do you know if your pretty picture is correct, given that you could draw it any number of ways and still be a somewhat decent representation? And after that, how do you translate it into actual usable code?

This is when they get introduced to design patterns. Hey look! These are like standard answers thought up by four very smart people. How do we know there are only four? They are called the Gang of Four, duh! Come see their voluminous tome of knowledge! For each problem that you are trying to solve, find the pattern that fits and plug it in. Hey presto! We end up with a group of programmers who think that programming is really really hard, cobbling programs together out of disparate pieces of other peoples’ code and tweaking the variables, hoping that everything will magically come together.

Take a deep breath…

and forget all that for now. Let’s go back to the basics. What is a programming language? It is the medium through which you direct the computer to do what you want. Plain and simple. It comprises two parts, the “Programming” and the “Language”. Let’s go through the easier part first.


Language is exactly what it sounds like. It’s just a different way of speaking/writing. It still does the same thing, which is conveying information from one party to another. There is no difference between a programming language and a natural language like Japanese or English, except that it is much, much easier to learn. Why is it easier to learn? The vocabulary is many times smaller than that of a traditional language. For example, if you want to understand C, all you need to understand are fewer than a hundred words. Compare that to the 171,476 words in common use for the English language (as listed in Oxford).

If you want to learn Java, all you have to figure out is which words correspond to the ones you are used to using in C, and voila, you are a Java programmer! In fact, the only time you find really major differences is when languages don’t support certain features. These, however, are easy to categorise.

  1. Is it a generic programming language? That means you can use goodies like templates to write code that works across data types. (e.g. C++, Java)
  2. Is it an object-oriented programming language? That means you can use classes and objects. (e.g. Python)
  3. Is it a procedural language? That means you can use functions. (e.g. C)
  4. If it is none of the above, you are probably coding in assembly and have to conceptualize all those nice features by your lonesome.
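Here is a quick sketch of the first three styles in one place. Python happens to support all of them, and every name below is invented purely for illustration:

```python
# Procedural style: plain functions, as in C.
def greet(name):
    return "Hello, " + name

# Object-oriented style: classes and objects bundle data with behaviour.
class Greeter:
    def __init__(self, greeting):
        self.greeting = greeting

    def greet(self, name):
        return self.greeting + ", " + name

# Generic style: the same code runs on any type that supports the
# operations used, much like a C++ template would.
def largest(items):
    best = items[0]
    for item in items[1:]:
        if item > best:
            best = item
    return best

print(greet("world"))                  # Hello, world
print(Greeter("Konnichiwa").greet("world"))
print(largest([3, 1, 4]))              # works on ints...
print(largest(["ant", "zebra"]))       # ...and on strings
```

Assembly, as per point 4, gives you none of this; you would build your functions out of jumps and a stack by hand.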

Whatever the case, the art of using a language is merely translating from your native tongue (which I presume is English, since you are reading this) into the programming language of your choice. What’s that stuff in your native tongue that you have to translate? Why, those are the instructions you wanted to give the computer to tell it what to do, and that stems from the next big part… the “Programming” part.

Programming

Now that you know how to express yourself properly, you need to tell the computer how to do stuff. Unfortunately, in spite of decades of technological progress boasting multi-core processors and terabytes of memory, computers are still as dumb as a brick. You need to explicitly tell them what to do each and every step of the way. Worse, thanks to our conditioning by ambiguous languages like English, most people struggle to drill down to the absolute steps necessary to accomplish a task. It is, however, not a complicated process.

It starts off with a problem. (If you didn’t have a problem to solve, why are you writing a program? For fun? Blasphemy!) You take this problem and break it down into the steps you need to solve it. Each of those steps, in itself, poses a problem. So you take all of those and break them down too, until the instructions become atomic enough for the computer to execute. That sounds a bit abstract, so let’s make an analogy.

Assume that you are on the couch, watching TV, when you get the overwhelming urge for a beer. So there you have a problem: you need a beer. Let’s further assume that you, the thinking programmer, are the brain, and you have to instruct the stupid non-thinking computer, which is the body. So, being so smart, you say, “Go to the kitchen, get the beer from the fridge and drink it.” The body goes, “Go?” So you have to clarify, “Get up, turn right, walk two steps, turn right again, walk five steps, then turn left.” To which the body haughtily replies, “Get up?” So you have to go, “Flex both quadriceps, while pushing the arms forward to maintain balance, until fully extended and upright.” Maybe this is something the body is finally capable of doing, so hurrah! Now you can go through the process of instructing it how to walk…
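The beer errand is exactly this kind of decomposition, and it translates straight into code. A sketch in Python, with every function name made up for the sake of the analogy (each print stands in for an atomic action the body can actually execute):

```python
# Atomic actions the "body" already knows how to do.
def flex_quadriceps():
    print("flexing quadriceps")

def push_arms_forward():
    print("pushing arms forward for balance")

def turn(direction):
    print("turning " + direction)

def walk(steps):
    for _ in range(steps):
        print("taking a step")

# Each higher-level instruction is just a sequence of lower-level ones.
def get_up():
    flex_quadriceps()
    push_arms_forward()

def go_to_kitchen():
    get_up()
    turn("right")
    walk(2)
    turn("right")
    walk(5)
    turn("left")

def get_beer():
    go_to_kitchen()
    print("opening the fridge, grabbing the beer")

get_beer()
```

The top-level call reads like the original sentence; everything below it is the clarification the body kept demanding.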

If you are lucky, some guy in the past might have already figured out that walking is a good ability to have. So he compiled a set of instructions on how to walk, which you can then feed to the body, merely having to tell it where to walk to. Well, isn’t that easier! These pre-made instruction packs, commonly called APIs or SDKs, save you from having to laboriously go through each and every minute detail. However, before you use these instruction packs, buyer beware! The walk might not be perfect… it might have a limp! Or it might only be good for the first ten steps. Or it might leave you prone to falling down on certain terrain. Always read the fine print, test it to make sure it does what you think it is doing, and be aware of any side effects.
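Testing an instruction pack before trusting it can be as little as a handful of sanity checks. Here Python's built-in math.hypot stands in for a third-party routine; the checks below are illustrative, not exhaustive:

```python
import math

# The cases our program actually cares about.
assert math.hypot(3, 4) == 5.0   # the textbook right triangle
assert math.hypot(0, 0) == 0.0   # the degenerate case

# The fine print: hypot is written to avoid overflow where the naive
# sqrt(x*x + y*y) would blow up for very large inputs.
big = 1e200
assert math.isclose(math.hypot(big, big), big * math.sqrt(2))

print("instruction pack behaves as expected on our cases")
```

A few minutes writing checks like these beats days of chasing a limp you inherited from someone else's code.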

So now that you have the gist of how to program, you might find yourself encountering a couple of problems. The first is that you can’t figure out how to properly break down a problem, or part of one. The second, particularly for game programmers, is that your set of instructions is so long and convoluted that the machine simply can’t execute it fast enough. In either case, programmers tend to apply the same solution… “Think harder!” So you end up with these zombie programmers staring mindlessly at the screen, wracking their brains at the problem till their heads hurt and they slowly lose their minds.

The secret that eludes these programmers is that it isn’t a matter of finding a solution. It is instead a matter of perceiving the problem. That’s right, it’s a matter of perception! Oh yes, I see you, the artist giggling in the corner about our silly programmer post. Indeed, artists have known this for years. Before an artist learns to draw, he is first taught to see. Once he can perceive the objects in our world the correct way, he can put them on canvas. Similarly, depending on how you perceive the problem, different solutions will become apparent. How do you change your perception? You have to identify and challenge your presuppositions. Tear down the things that you always take for granted and ask, what if? Ask yourself what the problem is really about, and what else it could be about instead. The first few times, this can be really hard, but as you go on, it gets easier because *gasp* you are becoming a rational creature! And that is really difficult for a human being.

Good programmers take this concept and push it even further. Even when they are not stuck, they will view the problem from several different angles and pick the best, simplest solution that presents itself. That is why their code always seems ten times shorter than yours, simpler, and faster to execute. The more angles they cover, the better their pick, and the less code they eventually have to write. This is why, when writing code, 90% of your time should be spent thinking about the solution, and 10% typing it out. (Yeah, I pulled those numbers out of my ass, but you get the idea.)
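A classic toy example of how perception shortens code: summing the numbers 1 to n. Seen as "add them one at a time", you get a loop; seen the way Gauss saw it, as pairs that each total n + 1, you get a single expression. A sketch:

```python
# Perception one: grind through the numbers one by one.
def sum_brute(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# Perception two: pair the first with the last, the second with the
# second-to-last, and so on -- n/2 pairs, each summing to n + 1.
def sum_clever(n):
    return n * (n + 1) // 2

print(sum_brute(100))   # 5050
print(sum_clever(100))  # 5050
```

Same problem, same answer; one version runs in constant time and is one line long, purely because the problem was looked at differently.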

So, back to our beer analogy: challenge your presuppositions. Do we necessarily have to go over to the beer? Can we make the beer somehow come to us instead? Thus we fire up our trusty speech API and yell the magic words, “Bitch! Get me a beer!” Wait ten seconds (during which you can do other stuff), and a freshly opened beer magically appears in your hand. That, my friends, is what we call elegant programming!

Top 5 Game Development Lessons of 2010

Posted: December 30th, 2010 | Author: | Filed under: Soapbox, Tutorials | No Comments »

As we come to the end of the year, it is good to look back and note what we have learned, in the hope that we can all become better game developers. Here we refrain from looking at the techie nerdgasm bits that are at best transient, and instead look at fundamentals that we will want to keep through the years to come.

5. “No Nonsense Game Development” is not always enforceable

I thought I would never say this, but I have met my match as far as keeping the “nonsense” down in game production goes. Under normal circumstances this should never happen, as it is kept in check by the management, who have a high stake in the success of the project. However, just because something shouldn’t happen has never stopped anybody before. The root cause in this case was so fundamental that it cannot be slapped away by throwing out catchy phrases like “agile development”. It is instead the decidedly human (yes, somehow it’s always humans at fault, isn’t it?) traits of insecurity and conceit. Unlike bad communication or management, which can be remedied with proper instruction, the human factor can only be conquered by each individual from within.

4. The importance of technical preproduction increases with team size

I have always been a fan of individual programmer creativity. I generally like to modularize the components of the game, give each one a specification and let the programmer create the black box that handles that task. Landing in a project after pre-production, with barely more than a week to create a basic framework, is sheer madness. However, the pressure of an idle workforce and hubris in my ability to consistently produce miracles convinced me to “rise to the challenge”.

Of course, it didn’t help that the team was about 10 people strong with a majority of junior programmers and interns. The correct course of action would have been to bite the bullet and halt production, so I could lend more in the way of structure and guidelines, as well as general team training. This oversight cost the project dearly. Would it have been enough to save it? That’s anybody’s guess. Will I learn from it? You bet!

3. Web presence is well worth the time investment for B2B marketing

This was an interesting year in the sense that I spent zero time on face-to-face big event networking, mainly because I was preoccupied with ongoing projects. However, the information provided on this website, coupled with maintaining a strong presence on networks like LinkedIn and Facebook appeared to keep the job opportunities streaming in at a respectable rate. Even now, less than a week after I stepped away from my prior post, I have not one but two offers from separate parties for my next endeavour.

The other advantage of online social networking is that it allows you to gauge the mood, disposition and morale of your staff. They are more likely to vent online, even if they have to do so obliquely lest they incur the wrath of the NDA-toting lawyers. Of course, if they don’t trust you, you will never get to see any of this, Facebook buddies or no. So yes, once again technology is no substitute for the human touch.

2. Time waits for no man

When we were doing tests on the iPhone 3GS back in April, we discovered that it had superior graphical capabilities, an order of magnitude beyond the previous model. This of course opened the door to a wide range of possibilities and hastened us down the road of ambition and scale. Eight months later, while we still have nothing more than a prototype to show for it, the Unreal Engine has made its way onto the platform, bringing with it the likes of Infinity Blade, which, while simple in gameplay, takes our breath away in awesomeness. We dragged on too long and missed the window.

Perhaps a better strategy would have been to focus more on core gameplay fundamentals rather than technology. While tech is flashy yet fleeting, design is eternal and beautiful.

1. The number one priority for game development companies should be their core development team

Some years back, when Ubisoft was first setting up shop in Singapore, I went in for an interview out of curiosity. What they told me blew me away, for totally the wrong reasons. They stated that they wished to spend one whole year just training up the team and integrating it with the rest of the Ubisoft family before doing anything serious. At the time I was thinking, “OMG! One year without portfolio pieces!”

What I missed was the wisdom of this approach. Your company can only go as far as your team can take it. If you have a strong team, you can do great things. If you, like many (I hesitate to say all) startup games studios in Singapore, just dive in hoping to learn as you go along, that team can only accomplish so much. Worse still, if you fail to retain the team that you have (i.e. high staff turnover), you will forever be stuck at that low but very flat plateau. A company’s crown jewels are not its assets or its vision, but its people.

I don’t know for sure if Ubisoft followed through with their one year program, but from the media I have gathered that they have contributed rather sizable chunks to the Assassin’s Creed franchise. I’ll say it again… you need a strong team to accomplish great things.