Bump Mapping using GLSL

By Jérôme 'JeGX' GUINOT
jegx [at] ozone3d [dot] net
Initial draft: November 4, 2005
Update: November 18, 2005
Update: March 8, 2006
Update: December 30, 2006


1 - Introduction
2 - Lighting equations
3 - The tangent space
4 - The bump mapping shader
5 - Downloads

1 - Introduction

Bump mapping is a per-pixel lighting technique: all the lighting calculations (the application of the lighting equations) are performed for each pixel. The power of current graphics processors (GPUs) makes it possible to reach this precision while preserving acceptable frame rates.

The demonstration that accompanies this tutorial shows a torus rendered with simple texturing and per-vertex lighting (torus_simple_mapping_test.xml), and the same torus rendered with the bump mapping technique (torus_bump_mapping_test.xml). In the first case the frame rate is around 900 FPS (a typical rate on a development workstation with a GeForce 7800 GT graphics card); in the second it falls to around 500 FPS, but with a much better rendering quality, as the following images show:


fig.1 - Simple texturing rendering with per vertex lighting.


fig.2 - Bump mapping rendering with per pixel lighting.

This performance gap has two main causes. First, bump mapping is fundamentally a multitexturing technique: in our case, two textures are used for the bump effect, and sampling several textures costs more than sampling one. Second, all the lighting calculations are performed for each pixel.

Now let us analyze the bump mapping technique itself. Its main goal is to simulate relief on flat geometry, which makes it possible to render objects with a very detailed appearance at a low CPU cost. The principle is simple: the normal vector is perturbed at each processed pixel. But before going into the details of the code, we first need to recall the lighting equations we will implement.

2 - Lighting Equations

The final color of the pixel displayed on the screen is given by the following equation:

If = Ia + Id + Is

where If is the intensity of the pixel final color, Ia is the intensity of the ambient color, Id is the intensity of the diffuse color and Is that of the specular color. For more explanations on these various components, please refer to the Lighting & Materials tutorial (soon available).

Ia, Id and Is are all four-dimensional RGBA vectors.

The Ia term is the ambient component: the product of the ambient component of the light and that of the material covering the surface of the 3D object:

Ia = Al * Am

where Al is the ambient component of the light and Am that of the material. Ia is generally a constant RGBA vector: its value is the same for every pixel. We will see in another tutorial a more advanced version of this ambient term, with the technique known as ambient occlusion lighting.

The Id term expresses the final diffuse component. This component is given by the following equation:

Id = Dl * Dm * LambertTerm

where Dl is the diffuse component of the light and Dm that of the material. The LambertTerm factor is the keystone of the lighting equations: it is this value that creates the self-shadowing of a 3D object. The Lambert coefficient is calculated with the following dot product:

LambertTerm = max( N dot L, 0.0)

where N is the normal vector at the considered pixel and L the light vector at the same pixel. This simple but fundamental relation tells us that the Lambert coefficient reaches its maximum (1.0) when the angle between the two vectors L and N is zero, i.e. when the pixel directly faces the light. In every other case the Lambert coefficient varies between 0.0 and 1.0, which produces the self-shadowing.

The max() function is just there to prevent the Lambert term from becoming negative.
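As an aside, the Lambert term is easy to check outside the shader. Here is a minimal Python sketch (not part of the tutorial's demo) using plain lists in place of GLSL vec3 values:

```python
# Illustrative sketch of the Lambert term: max(N dot L, 0.0).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert_term(n, l):
    # Clamp negative values so surfaces turned away
    # from the light receive no diffuse contribution.
    return max(dot(n, l), 0.0)

n = [0.0, 0.0, 1.0]                        # normal facing the viewer
print(lambert_term(n, [0.0, 0.0, 1.0]))    # light head-on -> 1.0
print(lambert_term(n, [0.0, 0.0, -1.0]))   # light behind  -> 0.0
```

Both N and L are assumed normalized, exactly as in the shader code later in this tutorial.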

Update: March 8, 2006:

The Is term expresses the final specular component. This component is obtained by:

Is = Sm * Sl * pow( max(R dot E, 0.0), f )

The Is term is by far the most complicated to calculate, but it is responsible for the famous specular highlights on the surface of objects. Sl is the specular component of the light and Sm that of the material. E is the view (or camera) vector and R is the reflection of the light vector L about the normal N. R is obtained with:

R = reflect(-L, N)

where N is the normal vector at the considered pixel, L the light vector, and reflect() a built-in GLSL function that computes the reflection of L about N. The pow() function raises a number n to the power p: pow(n, p). f is the specular exponent (the famous shininess in OpenGL), which controls the hardness and tightness of the specular highlight.
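The reflect() function itself is simple; in GLSL it is defined as reflect(I, N) = I - 2 * dot(N, I) * N, with N assumed normalized. The following Python sketch mirrors that definition and evaluates the specular term for a trivially aligned configuration (the vectors here are made-up examples):

```python
# Sketch of the specular term Is ~ pow(max(R dot E, 0.0), f).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(i, n):
    # GLSL definition: I - 2 * dot(N, I) * N (N normalized).
    d = dot(n, i)
    return [ic - 2.0 * d * nc for ic, nc in zip(i, n)]

def specular_term(n, l, e, shininess):
    r = reflect([-x for x in l], n)      # R = reflect(-L, N)
    return max(dot(r, e), 0.0) ** shininess

n = [0.0, 0.0, 1.0]
l = [0.0, 0.0, 1.0]
e = [0.0, 0.0, 1.0]
print(specular_term(n, l, e, 64.0))  # perfect alignment -> 1.0
```

Tilt E away from R and the term collapses quickly: that is the role of the exponent f.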

These short explanations show the importance of the normal vector N. In the traditional rendering process, the N vector at a given pixel results from the interpolation of the normal vectors of the three vertices that form the current face of the 3D object. In that case, the variations of N across the face are very small.

The bump mapping technique consists precisely in giving more life and sparkle to this poor N vector by fetching, for each pixel, a normal vector from a normal map. For more details on normal maps, please refer to this tutorial: Normal Maps. The textures used in this tutorial are the same as those of the normal maps tutorial.

3 - The tangent space

Now let us look at another point, more important for the implementation of bump mapping than for its overall understanding: the per-vertex space better known as the tangent space. This space is a frame of reference attached to each vertex, in which the position of the vertex is {0.0, 0.0, 0.0} and the coordinates of the vertex normal are {0.0, 0.0, 1.0}.

The three vectors forming this orthonormal frame of reference are named tangent, binormal and normal with:
tangent vector = {1.0, 0.0, 0.0} or X axis
binormal vector = {0.0, 1.0, 0.0} or Y axis
normal vector = {0.0, 0.0, 1.0} or Z axis.

Things start to become clear once we assume that the normal vectors stored in the normal map are expressed in tangent space. This explains the bluish color of the normal map: most vectors point along the Z axis. If the normal map were expressed in the object's frame of reference, there would be vectors pointing along all three axes of the object's local frame, which would produce bluish, greenish and reddish zones.
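The bluish tint follows directly from how a normal map stores its data: each component of a unit normal, in [-1, 1], is remapped to [0, 1] before being stored as an RGB texel (the shader later undoes this with rgb * 2.0 - 1.0). A quick sketch of this round trip:

```python
# Encoding/decoding of a tangent-space normal into an RGB texel.

def encode(n):
    # [-1, 1] -> [0, 1] per component.
    return [0.5 * c + 0.5 for c in n]

def decode(rgb):
    # [0, 1] -> [-1, 1], as done in the pixel shader.
    return [2.0 * c - 1.0 for c in rgb]

flat = [0.0, 0.0, 1.0]        # undisturbed normal in tangent space
print(encode(flat))            # [0.5, 0.5, 1.0] -> the typical light blue
print(decode(encode(flat)))    # round-trips back to [0.0, 0.0, 1.0]
```

A strong blue channel with mid-gray red and green is exactly the encoding of the {0, 0, 1} normal, hence the overall color of fig.3 below.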

The reason this matters is that creating a normal map is easier when the normal vectors are expressed in tangent space. Moreover, most bump-map creation tools output normal vectors in tangent space (the NVIDIA plugins for Photoshop or the tool provided by ATI are good examples).

The problem is that the normal vector is expressed in tangent space, whereas the other vectors used in the calculations (the light and view vectors) are expressed in another space (the camera space). All these vectors must therefore be expressed in one single space so that the calculations (mainly dot products) are meaningful. That space is the tangent space. The following matrix product shows how to transform the light vector L, expressed in camera space, into tangent space:

|x|   |Tx Ty Tz|   |Lx|
|y| = |Bx By Bz| * |Ly|
|z|   |Nx Ny Nz|   |Lz|

Such a matrix product can be replaced by 3 dot products:

x = L dot T

y = L dot B

z = L dot N

where x, y and z are the coordinates of the light vector expressed in the tangent space and where the TBN vectors are expressed in the camera space...
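These three dot products can be sketched outside the shader as well. The T, B and N values below are made-up examples: a trivial frame where tangent space coincides with camera space, so the vector should come out unchanged.

```python
# Moving a camera-space vector into tangent space via 3 dot products.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_tangent_space(v, t, b, n):
    # The rows of the TBN matrix are the tangent, binormal and
    # normal vectors expressed in camera space.
    return [dot(v, t), dot(v, b), dot(v, n)]

t = [1.0, 0.0, 0.0]
b = [0.0, 1.0, 0.0]
n = [0.0, 0.0, 1.0]
print(to_tangent_space([0.3, -0.2, 0.9], t, b, n))  # unchanged here
```

With a real TBN frame (rotated with the surface), the same three dot products rotate the light vector into the space where the normal map lives.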

There is still one small detail to clear up: how do we get the T, B and N vectors for each vertex? For the N vector, no problem: it is the vertex normal provided by the 3D engine, available in GLSL code as the gl_Normal vector. The new element is the T vector, which is also provided by the 3D engine, but as a vertex attribute (the attribute keyword in GLSL).

Once we have the T and N vectors, the B vector is obtained with a cross product between T and N.

Update: December 30, 2006:

The following tutorial provides a method to compute the T and B vectors from N: Tangent Space Computing.

I think the essence of the theoretical side of bump mapping is now clear. It is time to dive into the GLSL code to really understand what is going on...


fig.3 - The normal-map in the tangent space.

4 - The bump mapping shader

The GLSL bump mapping shader code is given below:

[Vertex_Shader]

varying vec3 lightVec;
varying vec3 eyeVec;
varying vec2 texCoord;
attribute vec3 vTangent;

void main(void)
{
	gl_Position = ftransform();
	texCoord = gl_MultiTexCoord0.xy;

	// Build the TBN frame in camera space.
	vec3 n = normalize(gl_NormalMatrix * gl_Normal);
	vec3 t = normalize(gl_NormalMatrix * vTangent);
	vec3 b = cross(n, t);

	vec3 vVertex = vec3(gl_ModelViewMatrix * gl_Vertex);
	vec3 tmpVec = gl_LightSource[0].position.xyz - vVertex;

	// Express the light vector in tangent space.
	lightVec.x = dot(tmpVec, t);
	lightVec.y = dot(tmpVec, b);
	lightVec.z = dot(tmpVec, n);

	// Express the view vector in tangent space.
	tmpVec = -vVertex;
	eyeVec.x = dot(tmpVec, t);
	eyeVec.y = dot(tmpVec, b);
	eyeVec.z = dot(tmpVec, n);
}

[Pixel_Shader]

varying vec3 lightVec;
varying vec3 eyeVec;
varying vec2 texCoord;

uniform sampler2D colorMap;
uniform sampler2D normalMap;
uniform float invRadius;

void main(void)
{
	// Distance attenuation of the light.
	float distSqr = dot(lightVec, lightVec);
	float att = clamp(1.0 - invRadius * sqrt(distSqr), 0.0, 1.0);

	vec3 lVec = lightVec * inversesqrt(distSqr);
	vec3 vVec = normalize(eyeVec);

	vec4 base = texture2D(colorMap, texCoord);

	// Fetch the tangent-space normal and remap it from [0,1] to [-1,1].
	vec3 bump = normalize(texture2D(normalMap, texCoord).xyz * 2.0 - 1.0);

	vec4 vAmbient = gl_LightSource[0].ambient * gl_FrontMaterial.ambient;

	// Lambert term: everything is already in tangent space.
	float diffuse = max(dot(lVec, bump), 0.0);
	vec4 vDiffuse = gl_LightSource[0].diffuse * gl_FrontMaterial.diffuse * diffuse;

	float specular = pow(clamp(dot(reflect(-lVec, bump), vVec), 0.0, 1.0),
	                     gl_FrontMaterial.shininess);
	vec4 vSpecular = gl_LightSource[0].specular * gl_FrontMaterial.specular * specular;

	gl_FragColor = (vAmbient * base + vDiffuse * base + vSpecular) * att;
}
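When debugging a shader like this, it can help to replay the same arithmetic on the CPU with known inputs. The sketch below mirrors the pixel shader's math in Python; the ambient strength (0.1) and all input values are made-up examples, not values from the demo.

```python
# CPU-side sketch of the pixel shader's arithmetic, for sanity-checking.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    inv = 1.0 / math.sqrt(dot(v, v))
    return [c * inv for c in v]

def reflect(i, n):
    d = dot(n, i)
    return [ic - 2.0 * d * nc for ic, nc in zip(i, n)]

def shade(light_vec, eye_vec, bump, base, inv_radius, shininess):
    # att = clamp(1.0 - invRadius * distance, 0.0, 1.0)
    dist = math.sqrt(dot(light_vec, light_vec))
    att = min(max(1.0 - inv_radius * dist, 0.0), 1.0)
    l = normalize(light_vec)
    v = normalize(eye_vec)
    diffuse = max(dot(l, bump), 0.0)
    spec = min(max(dot(reflect([-c for c in l], bump), v), 0.0), 1.0) ** shininess
    # Ambient (0.1 here, made-up) and diffuse modulate the base color;
    # specular is added on top; everything is attenuated.
    return [(0.1 * b + diffuse * b + spec) * att for b in base]

print(shade([0.0, 0.0, 2.0], [0.0, 0.0, 1.0],
            [0.0, 0.0, 1.0], [1.0, 0.0, 0.0], 0.1, 64.0))
```

Feeding the shader a uniform bump of {0, 0, 1} should reproduce these CPU numbers: a quick way to confirm the tangent-space vectors are wired up correctly.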