
Shaders 101

Posted: July 23rd, 2011 | Filed under: Tutorials

As indie or casual game developers, we seldom get the chance to play with shaders. The reason is that there’s simply no need to get ourselves involved in all that “stuff”. The engines we work with come with most of the stock shaders that we need, so all that is required is a basic understanding of what they are and what they do. Kind of like suntan lotion: you don’t question too deeply how it works. You just slap it on.

Recently, however, I have had occasion to dabble in shaders and found that it is really not all that complicated. So here’s a little overview to get you started.

The Triumvirate

Some time ago, shaders were written purely in assembly, much like how normal programs were written an even longer time ago. The next natural step in the evolution was, of course, a set of high-level languages to deal with the complexity. HLSL was coined by the folks at Microsoft. It works in tandem with DirectX, meaning it’ll run cross-platform as long as by cross-platform, you mean Windows and Xbox. GLSL was spawned by the hippie open-source community and works as a direct extension of OpenGL, so it will run everywhere else. The folks at NVidia came up with CG, which doesn’t really run anywhere on its own but instead compiles into both HLSL and GLSL, presumably saving you the time of writing shaders in both languages.

Structurally, they are all the same. It is only a matter of syntax and code rearrangement, if you ignore some of the more esoteric features of each language. All of them give you the power to manipulate matrices and vectors as easily as you would mere integers in traditional programming. They are also designed to be (potentially at least) massively parallel so more things can be done faster, which is generally a good thing. The price, however, is limitations in what you can actually do within your shader.
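To give a feel for that, here’s a minimal GLSL sketch (the values and the shoving about are made up purely for illustration) of vectors and matrices being treated as first-class citizens:

GLSL:
void main()
{
    vec4 offset = vec4(1.0, 2.0, 3.0, 0.0);  // one constructor call
    mat4 shove = mat4(1.0);                   // 4x4 identity matrix
    shove[3] = offset;                        // columns are just vec4s
    vec4 p = shove * gl_Vertex;               // matrix * vector with a single '*'
    p.xyz = p.zyx;                            // swizzling: reorder components at will
    gl_Position = gl_ModelViewProjectionMatrix * p;
}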

The Pipeline

The programmable pipeline, as they like to call it, differentiates itself from the more traditional fixed pipeline in the sense that it is, well, programmable. This gives the programmer (or artist, if the crop of shader generation tools keeps growing) more power over the rendering process.

The old way of doing things was that you’d fill up vertex buffers with vertices, index buffers with indices, strategically position a few lights and define a few global parameters. After you had everything configured sweetly, hey presto, there’s your pretty picture! The main errors students commit using this traditional workflow are trivial things like… do you have a light in the scene? Is your camera pointed the right way? Is the object so small that it occupies less than one pixel on screen? Ahh… life was simple back then!

Nowadays, you start off with a set of vertices. These could come directly from meshes, or they could be generated by geometry shaders (I don’t know much about geometry shaders so I’m not going to say much about them). It doesn’t matter. Each and every vertex gets run through your vertex shader program. This happens four, eight or however many channels your graphics card has at a time. As a result, you can’t get information about the other vertices (unless you are especially clever and sneaky about it), but you can pretty much do whatever you want with the vertex you have. Change its position, its color, its normal or any of its other attributes. Totally up to you. Of course, with great power comes great responsibility, and with great responsibility comes greater potential to f*** things up.
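As a small taste of that (the full treatment comes in the anatomy section below), here is about the smallest meaningful GLSL vertex shader: it takes the one vertex it is handed, puffs it out along its normal, and transforms it. The “puff” uniform is a made-up knob fed in by the application.

GLSL:
uniform float puff; // hypothetical knob supplied by the application
void main()
{
    // inflate the mesh by pushing each vertex along its normal
    vec4 displaced = gl_Vertex + vec4(gl_Normal * puff, 0.0);
    gl_Position = gl_ModelViewProjectionMatrix * displaced;
}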

Once all the vertex processing is done, your vertices get rasterized. Usually, by that, we mean that they get transformed to 2D screen coordinates, but that isn’t exactly true. It is the job of your vertex shader to transform the vertices into 2D coordinates (it’s just a matrix multiplication, not hard). What the rasterizer does is go through the pixels between your vertices and interpolate the values. So if you have one vertex that is red and another that is white, all the pixels in between will be varying shades of pink. This doesn’t just happen for your colors, but also for normals and whatever other attributes you may have chosen to endow your vertices with.

Each of these pixels is then fed into your pixel (if you are from Microsoft) or fragment (if you are a hippie) shader for further processing. There, you can do a whole bunch of other complex or simple operations, simply to figure out what color you want to paint that particular pixel. As with your vertex shader, you only have access to that one pixel throughout your shader, and the pixels themselves get pushed through n channels at a time, depending on how much your vendor crippled your graphics card via firmware.
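Again as a taste, here is the fragment-shader equivalent of “hello world”: it ignores everything coming down from upstream and paints every pixel it is handed the same color.

GLSL:
void main()
{
    // paint this pixel orange, no questions asked
    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);
}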

Shader Anatomy

Finally we get to the code part of things. I’m going to compare CG and GLSL, mainly because CG and HLSL code look pretty similar, and CG has fewer letters.

Data types

There are four general data types to be found in a shader language.

  • There’s the basic floating point number, which all three languages have aptly named “float”.
  • There’s the vector of varying sizes, called float2, float3 and float4 in CG, or vec2, vec3 and vec4 in GLSL.
  • There’s the matrix of varying sizes, called float2x2, float3x3 and float4x4 in CG, and mat2, mat3 and mat4 in GLSL.
  • There are textures of varying dimensions, accessed through samplers: sampler1D, sampler2D and sampler3D in both CG and GLSL, plus samplerCUBE (CG) or samplerCube (GLSL) for cube maps. (No, there are no 4D textures to blow your mind; volumes and cube maps are as wild as it gets.) These are essentially the equivalent of lookup tables.

Of course, there are a lot more actual data types than these, most having to do with fixed-point math or irregularly shaped matrices. You also have bools and stuff, but you can figure those out on your own.
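In GLSL, declarations of the four families look like this (the names are arbitrary, and the texture gets used exactly like a lookup table):

GLSL:
uniform float shininess;      // plain scalar
uniform vec3 lightColor;      // three-component vector
uniform mat3 spin;            // 3x3 matrix
uniform sampler2D heightMap;  // 2D texture, i.e. a lookup table

void main()
{
    // look up a texel, then spin and tint it; purely illustrative
    vec3 c = spin * (lightColor * texture2D(heightMap, vec2(0.5, 0.5)).rgb);
    gl_FragColor = vec4(c * shininess, 1.0);
}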

Types of Variables

As you may or may not have gathered from the section on pipelines, much of the idea behind shaders is about data flow and how it trickles down into the pixels. As a result, we have four types of data.

Uniform variables are stuff that you just sorta shove into the shader. How they get in is up to you. Some have them set up as part of the exporter from Max or Maya. Some have them built into the level editor. As long as you feed your shader before you run it, all will be well. Here’s your world matrix, a handful of numbers that should mean something and a few vectors for good measure. glhf! CG embraces the evil and defines these simply as global variables within the shader code. GLSL is more circumspect and demands that you use the keyword “uniform” to denote them. GLSL also has a whole bunch of predefined uniform variables, just to keep everybody on the same page.

CG:
float time;
float amplitude;
float frequency;
float ambient;
float4 lightdir;
float4x4 modelViewProj;
sampler2D tex0;
GLSL:
uniform float time;
uniform float amplitude;
uniform float frequency;
uniform float ambient;
uniform vec4 lightdir;
uniform sampler2D tex0;
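Those predefined GLSL uniforms deserve a quick plug: the fixed-function state (the matrix stacks, the light settings, and so on) is exposed to the shader for free. A quick sketch of a vertex shader leaning on them:

GLSL:
void main()
{
    // gl_NormalMatrix, gl_LightSource and gl_ModelViewProjectionMatrix
    // are all predefined; no uniform declarations needed
    vec3 n = gl_NormalMatrix * gl_Normal;
    vec3 l = normalize(gl_LightSource[0].position.xyz);
    gl_FrontColor = gl_Color * max(dot(n, l), 0.0);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}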

Uniform variables aren’t your only form of input. There are also vertex attributes. These are attributes defined as part of your vertex data structure. They can include mundane things like color, normals and uv coordinates, as well as customized data that only has significance to your skillfully crafted shader. How do you generate these vertex attributes? They can either be pre-calculated or, more often than not, “painted” in by the 3D artist.

CG:
struct a2v
{
    float4 position : POSITION;
    float4 color : COLOR0;
    float4 normal : NORMAL;
    float2 texcoord : TEXCOORD0;
};
GLSL:
//attribute vec4 gl_Vertex already defined
//attribute vec4 gl_Color already defined
//attribute vec4 gl_Normal already defined
//attribute vec2 gl_MultiTexCoord0 already defined

In CG, all you do is define a struct and dump whatever vertex information you want into it. This will later be fed into your vertex shader. You can call your attributes whatever you like, but you may need to tag them with semantics (the all-caps labels after the colons) for deciphering purposes. In GLSL, you use the keyword “attribute”. Like uniform variables, GLSL has a lot of predefined vertex attributes to cover the usual suspects.
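If you want a custom attribute, say a hand-painted per-vertex weight for wind sway (a made-up example), the GLSL side would look something like this:

GLSL:
attribute float windWeight; // hypothetical value painted in by the artist

void main()
{
    // sway the vertex sideways, scaled by however much the artist painted in
    vec4 swayed = gl_Vertex;
    swayed.x += windWeight * sin(gl_Vertex.y);
    gl_Position = gl_ModelViewProjectionMatrix * swayed;
}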

Varying variables are what your vertex shader spits out, and they are then fed into your pixel/fragment shader. Remember that by the time they reach the pixel shader, you will more often than not be working on interpolated data. In CG, you define another struct to hold this intermediate information. In GLSL, you use the keyword “varying”. As always, there are a bunch of predefined varying variables.

CG:
struct v2p
{
    float4 position : POSITION;
    float4 color : COLOR0;
    float4 normal : NORMAL;
    float2 texcoord : TEXCOORD0;
};
GLSL:
//varying vec4 gl_Position already defined
//varying vec4 gl_FrontColor already defined
varying vec4 normal;
//varying vec4 gl_TexCoord[] already defined


Finally, you have the output from your pixel shader. In CG this is, you guessed it, another struct. In GLSL, there’s no keyword. You just use the predefined variables, namely gl_FragColor.

CG:
struct p2f
{
    float4 color : COLOR0;
};
GLSL:
// gl_FragColor already defined

The Shader Functions

After you have defined the data that you are going to be playing with, all that is left is to write the shaders themselves. Each shader is a self-contained function. In GLSL, your vertex and fragment shaders usually live in different files, and the entry point for each is called “main()”. In CG, however, you can have multiple shaders in the same file and call them whatever you want. Whatever the case, during the actual render process, one program, comprising exactly one vertex shader and one pixel shader, will be run per mesh per pass.

Here is a vertex shader that bounces the mesh around a bit.

CG:
v2p mainV(a2v IN)
{
    v2p OUT;
    OUT.normal = IN.normal;
    OUT.color = IN.color;
    // offset the vertex, then transform it into clip space
    float4 newpos = IN.position;
    newpos.xyz += amplitude * sin(time * frequency);
    OUT.position = mul(modelViewProj, newpos);
    OUT.texcoord = IN.texcoord;
    return OUT;
}
GLSL:
void main()
{
    normal = vec4(gl_Normal, 0.0); // gl_Normal is a vec3, so pad it out
    gl_FrontColor = gl_Color;
    vec4 newpos = gl_Vertex;
    newpos.xyz += amplitude * sin(time * frequency);
    gl_Position = gl_ModelViewProjectionMatrix * newpos;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}

And here’s a pixel shader that computes lighting and figures out what color to render based on the supplied texture, vertex color, directional light and ambient lighting. Note that it would have been cheaper to calculate the lighting in the vertex shader rather than here, so that it is computed per vertex rather than per pixel (a sketch of that variant follows the listings below). Food for thought… think of all the neat hacks you can do! Also think of all the side-effects!

CG:
p2f mainF(v2p IN)
{
    p2f OUT;
    float NdotL = dot(IN.normal, lightdir);
    OUT.color = IN.color * tex2D(tex0, IN.texcoord) * NdotL + ambient;
    return OUT;
}
GLSL:
void main()
{
    float NdotL = dot(normal, lightdir);
    // the interpolated gl_FrontColor arrives here under the name gl_Color
    gl_FragColor = gl_Color * texture2D(tex0, gl_TexCoord[0].st) * NdotL + ambient;
}
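And here, as promised, is a rough GLSL sketch of the per-vertex variant: the diffuse term gets baked into the color once per vertex, leaving the fragment shader with nothing but a texture lookup. It assumes the same uniforms as above.

GLSL (vertex shader):
uniform float ambient;
uniform vec4 lightdir;

void main()
{
    // bake the lighting into the color, once per vertex
    float NdotL = dot(vec4(gl_Normal, 0.0), lightdir);
    gl_FrontColor = gl_Color * NdotL + ambient;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
GLSL (fragment shader):
uniform sampler2D tex0;

void main()
{
    // the lighting is already baked in; just modulate by the texture
    gl_FragColor = gl_Color * texture2D(tex0, gl_TexCoord[0].st);
}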



