
What is a Depth Buffer?

Posted: March 26th, 2014 | Filed under: Tutorials

How do you explain what depth-write and depth-test mean to a non-technical artist or designer? The obvious answer is you search Google for a well-written introduction to the concept of depth buffers, and point them in its direction. Somehow, my Google-fu failed this time, and I could not find an article without scary OpenGL code or mathematical formulae. So, here I am, writing this out. My public service for the year.

The Painter’s Algorithm

So let’s go back to the beginning when painting was a simple matter of putting brush to canvas. If a painter of yore were to draw the Success Kid meme, how would he go about doing it? It would probably go something like this:

[Image: Painter's Algorithm]

You start with a blank canvas, draw in the background, followed by the kid, and finally the text. Notice that you are layering things one on top of another, from the back-most to the front-most. This method is called the Painter's Algorithm. So why can't computers do this too? Well, the answer is, they can! The only problem is that it is slow, because we are drawing a lot of pixels that will never make it to the screen. Consider the part of the background behind the kid. We can't see it, yet we are drawing it, only to overwrite it later.
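If you like seeing the idea spelled out, here is a minimal sketch in plain Python. The grid size, layer shapes and names are all made up for illustration:

```python
# Painter's Algorithm: draw every layer in full, back to front.
# The framebuffer is faked as a WIDTH x HEIGHT grid of strings.
WIDTH, HEIGHT = 8, 6
framebuffer = [["empty"] * WIDTH for _ in range(HEIGHT)]

def draw_layer(pixels, color):
    """Overwrite every covered pixel, even ones painted before."""
    for x, y in pixels:
        framebuffer[y][x] = color  # earlier work may be thrown away here

# Back-most to front-most: background, then kid, then text.
draw_layer([(x, y) for x in range(WIDTH) for y in range(HEIGHT)], "background")
draw_layer([(3, 3), (4, 3), (3, 4), (4, 4)], "kid")
draw_layer([(3, 1), (4, 1)], "text")
# The background pixels under the kid and the text were drawn and then
# painted over: pure wasted work.
```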

So how expensive is it really to draw a pixel? Consider the obviously high-poly, normal-mapped, specular and fresnel-lit Success Kid model. There are two factors under your control that directly affect how long it takes to render it on screen. The first is the vertex cost, which is roughly the complexity of your vertex shader multiplied by the number of tris, verts or polys that make up your model. The second is the pixel cost, which is the complexity of your pixel shader multiplied by the number of pixels the polygons occupy on screen (including all the alphaed-out bits).
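As a back-of-the-envelope illustration (every number here is invented, not measured from real hardware):

```python
# Toy cost model: total work = vertex work + pixel work.
vertex_shader_cost = 5    # abstract "work units" per vertex
pixel_shader_cost = 40    # shiny per-pixel effects cost far more
num_verts = 10_000        # the Success Kid model
num_pixels = 300_000      # screen area covered, alphaed-out bits included

total = vertex_shader_cost * num_verts + pixel_shader_cost * num_pixels
print(total)  # 12,050,000 units; the pixel term utterly dominates
```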

For better or for worse, the shiny stuff that all artists love (and that therefore costs the most) usually resides in the pixel shader. Per-pixel lighting, normal maps, environment maps… they all send the price you pay for each pixel skyrocketing. That's why, for shiny stuff, we want to minimize the number of pixels that are drawn, especially ones that would never be seen anyway.

Depth Sorting

It didn’t take long for somebody to come up with the bright idea of drawing things in reverse order. If we draw the kid before the background, we wouldn’t have to draw the part of the background that is hidden (or if you want to sound real smart, occluded). So the draw sequence would go something like this:

[Image: Front to Back]

So now, before we draw each pixel, we just need to check and see if that pixel has already been drawn. If it is already there, we skip the pixel and proceed to the next one. As Farengar Secret-Fire would say, "Simplicity itself!"

This is where the depth buffer comes in. The depth buffer serves as a record of which pixels have been drawn and which have not. So as we draw things on screen (i.e. to the color buffer), we record in the depth buffer that we have drawn to each pixel. This is called depth-write. The next time we want to draw to the same pixel, we check the depth buffer to see if the pixel has already been drawn to, and if so, we don't. This is called depth-test.
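As a sketch, the record could start out as nothing fancier than a grid of "drawn yet?" flags. This is not how real GPUs store it, but the logic is the same:

```python
WIDTH, HEIGHT = 8, 6
color_buffer = [[None] * WIDTH for _ in range(HEIGHT)]
drawn = [[False] * WIDTH for _ in range(HEIGHT)]  # our crude "depth buffer"

def draw_pixel(x, y, color):
    if drawn[y][x]:             # depth-test: someone already owns this pixel
        return                  # skip the (expensive) pixel shader entirely
    color_buffer[y][x] = color
    drawn[y][x] = True          # depth-write: record that we drew here

# Front-most first, so hidden background pixels are never shaded.
draw_pixel(3, 3, "kid")
draw_pixel(3, 3, "background")  # rejected: the kid got there first
```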

I know the question that’s burning a hole through your brain. “If it is simply a record of what pixel is drawn, why do we call it a depth buffer?”

What, it wasn’t? Well now it is, isn’t it? Before we answer that, let’s take a look at a problem we conveniently glossed over.

The Alpha Problem

Let’s revisit that approach with the full image:

[Image: Alpha Problems]

Oops, do you see what happened there? Our “Success!” caption was drawn on a single quad, with all the transparent bits alphaed out. Since it is front-most, we draw that first. When we draw the kid, it sees that those pixels are already drawn to, albeit with zero alpha, and doesn’t draw to it, so we never get to see the pixels that are behind the space between the letters.

There are a number of ways to solve this problem. We could model each of the letters individually, so that the texture is fully opaque. The problem with that is that your vertex count increases a lot, especially if you want smooth curves. Also, it doesn't handle translucent pixels (i.e. those with partial alphas), which still need information about what's behind them. This might still work for some cases where the vertex overhead is not that big. If you can get away with it, do it!

Another way is alpha rejection. Whenever the pixel shader sees that what is about to be drawn has an alpha value below a certain threshold, it doesn't draw it! This solves the cost of having many vertices, but still doesn't solve the translucency problem. It also ruins the day of some graphics chips (most notably the PowerVR chips found in iPhones and some Android devices), which optimize around the assumption that pixels are not rejected by the shader.
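In sketch form, alpha rejection is a single early-out in the pixel shader (Python standing in for shader code here; the 0.5 threshold is an arbitrary pick):

```python
ALPHA_THRESHOLD = 0.5  # arbitrary cut-off for this sketch

def pixel_shader(texture_sample):
    """Return a color to write, or None to reject the pixel outright."""
    r, g, b, a = texture_sample
    if a < ALPHA_THRESHOLD:
        return None        # the shader-level "discard": nothing is written
    return (r, g, b, a)    # treated as fully opaque from here on
```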

The ultimate solution is acknowledging that we don't really have a clue what to do, and going back to the Painter's Algorithm. However, this time, we can be smarter about it. Divide all our objects into two groups: those with alpha, and those that are fully opaque.

First, we draw all the opaque geometry sorted front to back, making use of the depth buffer. However, instead of just storing a plain "have we written to this pixel?", we store the depth value (i.e. how far the pixel is from the camera). After that, we draw all the alpha geometry from back to front, checking the depth buffer against the depth value of the pixel we are about to draw. If we find that we are drawing behind something that has already been drawn, we don't draw it, because we know it will be occluded by an opaque object. Otherwise, we draw on top of it, blending with any pixel information that might already be present. Since we are drawing from back to front, there is no need to waste time writing to the depth buffer.
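Put together, the whole scheme looks roughly like the sketch below. Depth is measured as distance from the camera (smaller means closer), and the blend function is a plain "source over" stand-in for whatever blend mode is actually set:

```python
WIDTH, HEIGHT = 8, 6
color_buffer = [[(0.0, 0.0, 0.0, 1.0)] * WIDTH for _ in range(HEIGHT)]
depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]  # empty so far

def draw_opaque(x, y, depth, color):
    if depth >= depth_buffer[y][x]:   # depth-test: something closer is there
        return
    color_buffer[y][x] = color
    depth_buffer[y][x] = depth        # depth-write: remember how close we are

def blend(src, dst):
    """Plain "source over" blend."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    return (sr * sa + dr * (1 - sa),
            sg * sa + dg * (1 - sa),
            sb * sa + db * (1 - sa),
            sa + da * (1 - sa))

def draw_alpha(x, y, depth, color):
    if depth >= depth_buffer[y][x]:   # depth-test only...
        return                        # ...it's hidden behind opaque stuff
    color_buffer[y][x] = blend(color, color_buffer[y][x])
    # note: no depth-write for alpha geometry

# Pass 1: opaque geometry, sorted front to back.
draw_opaque(3, 3, depth=5.0, color=(0.8, 0.6, 0.5, 1.0))   # the kid
draw_opaque(3, 3, depth=20.0, color=(0.3, 0.5, 0.9, 1.0))  # background: rejected
# Pass 2: alpha geometry, sorted back to front.
draw_alpha(3, 3, depth=2.0, color=(1.0, 1.0, 1.0, 0.5))    # the caption quad
```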

[Image: Opaque followed by Alpha]

This leads us to the general rule:

  • For Opaque Geometry: enable depth-write, enable depth-test
  • For Alpha Geometry: disable depth-write, enable depth-test
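In a hypothetical engine, that rule might read something like this (the `Material` class and its field names are invented for illustration; real engines expose the same switches under different names):

```python
from dataclasses import dataclass

@dataclass
class Material:
    blending: bool      # on = the engine treats this as alpha geometry
    depth_write: bool
    depth_test: bool

rock_material = Material(blending=False, depth_write=True, depth_test=True)
smoke_material = Material(blending=True, depth_write=False, depth_test=True)
```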

You should also be aware of how to let the engine or game know what is opaque and what is not. In most editors, you can simply disable blending. This is really important: an opaque object with blending on will still look right, but it will incur the cost of a transparent object.

As your astute mind would have observed, using a lot of overlapping alpha leads to copious amounts of overdraw (redrawing the same pixel). The more screen space this takes up, the worse it gets. Thusly, we finally arrive at the moral of the story.

Alpha is evil! Shun it like the cancer that it is!


Common Artifact 1: Intersecting Alpha Geometry

If you have two pieces of alpha geometry that intersect or interlock each other, which one do you render first? Whichever one you pick, the one that is rendered first would not have the pixel information of the other, so blending will never be accurate. Stop tormenting your poor programmer… there is nothing he can do! This is further proof that alpha is evil.

Also, if you do tricks that offset the origin of a model, which messes up the sort order, you might get the same artifact we saw with the “Success!” text above. Your programmer can solve this problem by selectively sorting it, but usually at a cost. Tricks are evil too!

Common Artifact 2: Z-fighting

Occasionally, when two pieces of geometry are very close together, you may see them flicker, or observe moving stripes on them. This happens because you have hit the limits of the depth buffer. The depth buffer is not continuous. You can envision it as dividing the space between the near plane and the far plane of the camera into slices. If the pixels you want to render land on the same slice, the poor GPU gets confused about which is the one true pixel. There are a number of solutions (a rough numeric sketch follows the list):

  • A 16-bit depth buffer has 65,536 slices. A 24-bit buffer has 16,777,216 slices. A 32-bit buffer has 4,294,967,296 slices. So the obvious solution is to increase the bit depth of the depth buffer. However, depending on the hardware, this is not always possible.
  • You could move the objects further apart, which is the most common-sense thing to do. However, visually, it might look a little off.
  • Another trick is to adjust the camera so that the far plane is closer to the near plane. This will squish the slices closer together, making them thinner, and therefore decrease the chance of z-fighting.
  • You can move the offending objects closer to the camera.
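To put rough numbers on those slices for an orthographic camera, where they are evenly spaced (the near and far values below are invented):

```python
def slice_thickness(near, far, depth_bits):
    """Thickness of one depth slice for an orthographic camera."""
    slices = 2 ** depth_bits
    return (far - near) / slices

# A 1000-unit view range:
print(slice_thickness(0.1, 1000.1, 16))  # ~0.015 units per slice
print(slice_thickness(0.1, 1000.1, 24))  # ~0.00006 units per slice
```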

That last one needs some explaining. If you have a purely orthographic camera (i.e. there is no perspective; far away objects don't appear smaller than close ones), all slices are equally spaced from the near plane to the far plane. However, for perspective cameras, the slices are not born equal. Instead, they are bunched up close to the near plane and get progressively further apart as you approach the far plane. There are tricks your programmer can do in the shader to reduce this effect, but as noted in the previous section: tricks are evil!
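For the curious, here is a sketch of why the slices bunch up near the camera. It uses the classic OpenGL-style depth mapping; this is one common convention, and other engines and APIs differ in the details:

```python
def window_depth(z, near, far):
    """Depth-buffer value (0..1) for a point z units in front of the
    camera, using the classic OpenGL-style perspective mapping."""
    return far * (z - near) / (z * (far - near))

near, far = 0.1, 1000.0
for z in (0.1, 1.0, 10.0, 100.0, 1000.0):
    print(f"z = {z:7.1f}  depth = {window_depth(z, near, far):.6f}")
# With near=0.1 and far=1000, z=1 already lands at depth ~0.90 and
# z=10 at ~0.99: the last 99% of the distance range is squeezed into
# the top 1% of depth values, so far-away slices are huge.
```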

