Light and Shadow Mapping
Using Texture Mapping Techniques to Simulate Lights and Shadows in Real Time Computer Graphics
Author: Tomasz Zaniewski
November 2000
Supervisor: Professor Shaun Bangay
Submitted in partial fulfillment of the requirements for the Bachelor of Science (Honours) degree in the Department of Computer Science at Rhodes University.
Acknowledgements
Firstly, I would like to thank my supervisor, Shaun Bangay, for his assistance in this project and for finding some serious bugs in my code.
Secondly, I would also like to thank my parents, for giving me all the support that made it possible for me to engage in postgraduate studies.
I would also like to thank the Computer Science Department of Rhodes University for providing access to all the resources necessary for completing the project.
Table of contents
Chapter 1: Introductory Chapter
    1.1 Abstract
    1.2 Development tools
    1.3 Layout of document
Chapter 2: Background to Lighting and Texture mapping
    2.1 Introduction
    2.2 Light
    2.3 Shading
        2.3.1 Flat shading
        2.3.2 Gouraud shading
        2.3.3 Phong shading
    2.4 Texture mapping
    2.5 Other mapping techniques
        2.5.1 Affine mapping
        2.5.2 Perspective texture mapping
        2.5.3 Bump mapping
        2.5.4 Environment mapping
    2.6 Summary
Chapter 3: Light Mapping: Background and Design
    3.1 Introduction
        3.1.1 Problem statement
    3.2 Introduction to light mapping
        3.2.1 Why use light maps?
        3.2.3 Static light mapping
            3.2.3.1 Disadvantages of using static light maps
        3.2.4 Dynamic light maps
            3.2.4.1 Disadvantages of using dynamic light maps
        3.2.5 Three-dimensional light maps
    3.3 Related work
        3.3.1 The H-Test
        3.3.2 Generalization of the shading model
    3.4 Design
        3.4.1 Goal of the model
        3.4.2 The concept
        3.4.3 The mapping function
        3.4.4 Structure of spherical light map
            3.4.4.1 Structure of light map using rectangular co-ordinates
            3.4.4.2 Structure of light map using spherical co-ordinates
        3.4.5 The light object
        3.4.6 Generating the light map
        3.4.7 Combining light maps
        3.4.8 Rotating the light map
        3.4.9 Using the light map
        3.4.10 Design of experimental work
            3.4.10.1 Correctness
            3.4.10.2 Speed
            3.4.10.3 Accuracy
    3.5 Summary
Chapter 4: Light Mapping: Implementation
    4.1 Introduction
    4.2 Object representation
    4.3 The transformation system
    4.4 Transformation of normal vectors
    4.5 The light object
    4.6 Generating the light map
    4.7 The mapping function
    4.8 Rotating the light maps
    4.9 Combining light maps
    4.10 Summary
Chapter 5: Results and conclusion
    5.1 Results
        5.1.1 Correctness
        5.1.2 Speed
        5.1.3 Accuracy
    5.2 Conclusion
Chapter 6: Shadow mapping: Background and related work
    6.1 Problem statement
    6.2 Introduction
    6.3 Shadow algorithms
        6.3.1 The scan line algorithm
        6.3.2 The transformation algorithm
        6.3.3 The shadow volume algorithm
        6.3.4 The shadow buffer algorithm
        6.3.5 Projected shadows
    6.4 Related work
        6.4.1 Generating soft shadows using the shadow volume algorithm
        6.4.2 Speeding up the shadow buffer
        6.4.3 Incorporating the shadow buffer into real time graphics
    6.5 Examples of shadows in real time graphics
Chapter 7: Shadow mapping: Design
    7.1 Introduction and problem statement
    7.2 Design
        7.2.1 The shadow map
        7.2.2 Transforming into light co-ordinates
        7.2.3 Silhouette extraction
        7.2.4 Superimposing the texture
        7.2.5 The texture mapper
        7.2.6 Testing
    7.3 Summary
Chapter 8: Shadow mapping: Implementation
    8.1 Introduction
    8.2 The shadow map
    8.3 Creating the transformation matrix
    8.4 Drawing the silhouette
    8.5 Superimposing the shadow map on the texture
    8.6 The texture mapper
Chapter 9: Results and conclusions
    9.1 Results
    9.2 Conclusions
List of references
Appendix A
Appendix B
Chapter 1
Introductory Chapter
1.1 Abstract
Texture mapping was born as a technique to add realism to a scene by wrapping a bitmap onto a polygon and making it appear as though the bitmap undergoes the same transformations as the polygon. Since then it has grown considerably. Today it is used to simulate effects such as shiny surfaces (environment mapping) and rough surfaces (bump mapping) without adding any additional geometry. Recently, texture mapping has been used to simulate lights and, to a certain extent, shadows.
The research here shows just how applicable texture mapping techniques are to simulating lights and shadows. This is done by using texture mapping techniques to replace a light in the scene with a light map, and then showing that the scene is still illuminated in the same manner as with the conventional light. The research also investigates the effects of combining light maps and of rotating them, and shows that the light map improves the performance of the Phong shading algorithm.
In the area of shadows, the paper examines some of the most popular shadow casting algorithms and then implements a simple shadow casting algorithm to show how texture mapping can be used to cast shadows onto multi-planar surfaces.
1.2 Development tools
The code written in this project needs to be able to modify the way in which polygons are rendered at pixel level. Tools such as OpenGL and DirectX do not allow the programmer to do this. Take lighting as an example: with these tools a programmer may select whether the scene is to be rendered using the flat, Gouraud or Phong shading algorithm. A programmer may not, however, change the way these shading models are implemented, and cannot add his or her own shading model. For this reason OpenGL and DirectX were not used in the code written.
What is necessary for this project are tools that can do the following: open a window, plot a pixel at a point with co-ordinates x and y, change the color of the point to be drawn and provide a drawing area in the form of a chunk of memory. The Qt 2.1.0 library that comes standard with Red Hat 6.2 Linux provides the necessary tools.
The language used to develop the applications is C++. I chose this language because it is the language I am most familiar with. It is also the language of choice when it comes to programming computer graphics. Linux also has an efficient compiler for the language.
The operating system used is Linux. I chose this because I saw it as an opportunity to learn more about the operating system, which is quickly becoming a serious competitor for Windows.
1.3 Layout of document
This report is divided into nine chapters, each tackling a new topic. I have aimed to write it in a way that progresses logically. The first chapters introduce the reader to the basic knowledge required in order to understand lighting and texture mapping.
Chapter 1: Introductory chapter
Chapter 2: This chapter provides the reader with some background knowledge in the field of lighting and texture mapping in computer graphics.
Chapter 3: Deals with the concept of light mapping and also looks at work done by others in order to modify the conventional lighting model. It also outlines some of the design issues associated with the light mapping model implemented during the project.
Chapter 4: Deals with the implementation of the experimental light map model. It points out some of the most important algorithms and also focuses on problem areas in the implementation and how they were overcome.
Chapter 5: Results of testing the light map model are presented in this chapter. Possible explanations for unexpected results are given and conclusions are drawn.
Chapter 6: This chapter introduces some of the more popular shadow generation algorithms, it looks at extensions made by others to these algorithms and also looks at why they are not used in most real time graphics applications.
Chapter 7: Design of the shadow algorithm implemented during the project is discussed.
Chapter 8: The implementation of the shadow algorithm is discussed here.
Chapter 9: Results of the algorithm are presented in this chapter. Conclusions are drawn and recommendations for improving the implementation are made.
Chapter 2
Background to Lighting and Texture mapping
2.1 Introduction
Since this project deals with texture mapping and light, I would like to use this chapter to introduce the reader to these concepts. Explaining them here also enables the reader to understand how they relate to the work done in this project.
There is no doubt that the aforementioned concepts have contributed immensely to the realism of computer generated graphics.
2.2 Light
Figure 2.1a shows what a three-dimensional cube would look like if a method to simulate light in computer graphics were not available. It would be difficult to recognize it as a cube because it is impossible to see where its edges are located. Figure 2.1b shows the same cube, but this time it is illuminated by a light.
In order to illuminate the surface of a three-dimensional object one needs to determine how much light that surface reflects. From everyday life we know that an object appears most strongly lit where it faces the light source. Take the moon and the sun as an example: the parts of the moon facing the sun appear the brightest, while the parts of its surface facing away from the sun are not illuminated at all.
To simulate this in computer graphics one needs a method of determining the angle between the surface to be illuminated and the light source. Two vectors are used to solve this problem. The first is referred to as the normal vector: a unit vector perpendicular to the surface, calculated by finding the cross product of any two vectors representing two edges of the surface. The second is the vector pointing towards the light source, which can be obtained from the position of the light. Once these vectors are found, the cosine of the angle between them is calculated by finding the dot product of the two (unit) vectors. It is from this value and Lambert's Cosine Law that one obtains the intensity of the light for the surface in question.
Lambert's Cosine Law states that the effective intensity of a light of intensity I incident at angle θ is:
Effective intensity = I × cos(θ) (1)
This is illustrated in figure 2.2. Note that when θ is 0 the surface is facing the light source directly, and cos(0) is one, so from equation (1) the effective intensity of the light is at its greatest. As θ tends towards 90° the effective intensity of the light decreases. This is in accordance with the desired effect: the greater the angle between the two vectors, the more the surface faces away from the light, the smaller the cosine of the angle becomes and so the effective intensity of the light decreases.
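The two-vector method and Lambert's Cosine Law described above can be sketched in C++ (the language used in this project). This is a minimal illustrative sketch; the Vec3 type and the function names are illustrative rather than taken from the project code:

```cpp
#include <cmath>

// Minimal 3D vector with just the operations the lighting
// calculation needs.
struct Vec3 {
    double x, y, z;
};

// Cross product of two edge vectors yields the surface normal.
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Dot product of two unit vectors yields the cosine of the angle
// between them.
double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Lambert's Cosine Law: effective intensity = I * cos(theta), where
// cos(theta) = N.L for unit vectors. A cosine <= 0 means the surface
// faces away from the light, so the contribution is clamped to zero.
double lambert(const Vec3& normal, const Vec3& toLight, double intensity) {
    double cosTheta = dot(normal, toLight);
    return intensity * (cosTheta > 0.0 ? cosTheta : 0.0);
}
```

The normal is computed once per polygon from two of its edges; the clamp handles angles of 90° and beyond, where the surface receives no direct light.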
The above discussion was based on diffuse reflection (figure 2.3). In this type of reflection the amount of light reflected is determined by the orientation of the surface to the incident light rays. There are two other types of reflection that should be mentioned: ambient and specular reflection.
- Ambient reflection
Ambient reflection refers to the reflection of light rays coming from no particular point in space. It is attributed to light which originated at the light source but was scattered around by the environment.
- Specular reflection
In this type of reflection the majority of the incident rays are reflected in a particular direction as indicated in figure 2.4.
Each light also has color. In fact a light has three color components: ambient, specular and diffuse. Each of these determines the color of the ambient, specular and diffuse reflections respectively.
It is important to point out how color is represented on the video display of a computer. Any color seen on the screen is actually composed of a mixture of three primary colors: red, green and blue. These are known as the three components of a color. Each of these components in turn can range from 0 to 255; at 255 it is at its maximum intensity. A color represented by red = 255, green = 0 and blue = 0 would be a bright, pure red. The ambient, diffuse and specular components of a light in turn each have their own red, green and blue counterparts.
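For illustration, the three 0-255 components are commonly packed into a single 32-bit value when writing to the frame buffer. One possible layout is sketched below; the actual byte order depends on the display hardware, so this is an assumption, not a description of any particular system:

```cpp
#include <cstdint>

// Pack three 0-255 color components into one 32-bit pixel value,
// with red in the highest byte (one common layout among several).
uint32_t packRGB(uint8_t r, uint8_t g, uint8_t b) {
    return (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
}
```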
Once the cosine of the angle between the two vectors has been determined, a set of equations is applied to determine the effective strength of the color.
Ambient term = ambient component * ambient intensity (2)
Diffuse term = diffuse component * diffuse intensity * N.L (3)
Specular term = specular component * specular intensity * (R.V)^n (4)
Final intensity = ambient term + diffuse term + specular term (5)
Where N is the normal vector to the surface, L is the vector pointing to the light source, R is the direction of perfect reflection of L about N, V is the vector pointing towards the viewer and n is an exponent controlling how shiny the surface appears.
These calculations are repeated three times: once for each of the red, green and blue components of the diffuse, specular and ambient terms. The equations are adapted from [13].
To summarize, in order to illuminate an object one needs to take the following steps:
For every polygon in the object
- Calculate its surface normal
- Calculate the dot product of this normal and the vector pointing towards the light source.
- Apply equations 2 - 5
- Set the red, green and blue components
- Draw the polygon
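The per-polygon color calculation in the steps above can be sketched for a single color channel as follows. This is an illustrative sketch of the ambient, diffuse and specular terms, not the project's actual code; the routine is called once each for the red, green and blue channels, and all inputs except the final 0-255 result are assumed to be normalized to [0, 1]:

```cpp
#include <algorithm>
#include <cmath>

// Compute one color channel of a flat-shaded polygon.
// NdotL is the cosine between the surface normal and the light
// vector; RdotV is the cosine between the reflection and view
// vectors; shininess is the specular exponent.
int shadeChannel(double ambientComp, double ambientIntensity,
                 double diffuseComp, double diffuseIntensity,
                 double specularComp, double specularIntensity,
                 double NdotL, double RdotV, double shininess)
{
    double ambient  = ambientComp  * ambientIntensity;
    double diffuse  = diffuseComp  * diffuseIntensity  * std::max(NdotL, 0.0);
    double specular = specularComp * specularIntensity *
                      std::pow(std::max(RdotV, 0.0), shininess);
    // Sum the three terms, clamp to the displayable range and scale
    // to a 0-255 component.
    double total = ambient + diffuse + specular;
    return static_cast<int>(std::min(total, 1.0) * 255.0);
}
```

Clamping the dot products to zero prevents surfaces facing away from the light from being darkened below the ambient level.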
2.3 Shading
2.3.1 Flat shading
The lighting model discussed in section 2.2 implements what is known as flat shading. In this model the entire polygon is filled with the same color, so the calculations needed to determine the color need only be applied once per polygon. This model is very cheap in terms of CPU usage, but it produces undesirable results when the object to be illuminated has curved or smooth surfaces whose edges should not be visible. The polygons making up the sphere in figure 2.5 are easily visible when this shading method is applied, because the flat shading model does not take into account the effect of the polygons surrounding the polygon in question.
2.3.2 Gouraud shading
This method aims to eliminate the visibility of the edges by interpolating the color across the polygon. The light calculations are now repeated once for every vertex of the polygon. The color intensities are first interpolated along the edges, and then interpolated once again as the polygon is scan converted from top to bottom. This interpolation blends the colors at the vertices across the whole polygon, so the color transition between the edges of adjacent polygons is smoothed out. Figure 2.6 illustrates Gouraud shading.
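The core operation of Gouraud shading, linear interpolation of an intensity, can be sketched as follows. This is an illustrative sketch (the names are mine): the same routine is applied first along the polygon's edges and then across each horizontal span during scan conversion:

```cpp
// Linearly interpolate a color intensity for pixel x on a scan line
// running from xLeft (intensity iLeft) to xRight (intensity iRight).
double lerpIntensity(double iLeft, double iRight,
                     int x, int xLeft, int xRight)
{
    if (xRight == xLeft) return iLeft;   // degenerate one-pixel span
    double t = static_cast<double>(x - xLeft) / (xRight - xLeft);
    return iLeft + t * (iRight - iLeft);
}
```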
2.3.3 Phong Shading
This is the most accurate of all the shading methods. It is also the most expensive in terms of CPU usage. As with Gouraud shading, the method relies on interpolation but in this case it is not the color intensities that are being interpolated but the normal vectors themselves [5, pp42]. This implies that the color calculations are performed once per pixel. While this makes the shading method extremely accurate it also makes heavy demands on the CPU.
The smoothly shaded objects achieved by applying the last two methods are due to the fact that both methods take into account the effects of the surface normals of the polygons surrounding the polygon currently being shaded. This is illustrated in figure 2.7, which shows two polygons viewed edge on. Surface normals a and b are the normals that would be used if flat shading were applied. Phong and Gouraud shading use normal c, which is obtained by adding the normal vectors a and b.
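The two ideas just described, forming the shared vertex normal c from the adjoining face normals and (in Phong shading) interpolating normals per pixel, can be sketched as follows. The names and structure are illustrative, not taken from the project code:

```cpp
#include <cmath>

// Minimal vector type for this sketch.
struct Vec3 { double x, y, z; };

Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// The vertex normal c of figure 2.7: the sum of the adjoining face
// normals a and b, brought back to unit length.
Vec3 vertexNormal(const Vec3& a, const Vec3& b) {
    return normalize(Vec3{ a.x + b.x, a.y + b.y, a.z + b.z });
}

// Phong shading interpolates the normal itself between two vertex
// normals (t runs from 0 at one vertex to 1 at the other) and
// renormalizes at every pixel before the lighting equation is applied.
Vec3 interpolateNormal(const Vec3& n0, const Vec3& n1, double t) {
    return normalize(Vec3{ n0.x + t * (n1.x - n0.x),
                           n0.y + t * (n1.y - n0.y),
                           n0.z + t * (n1.z - n0.z) });
}
```

The per-pixel renormalization is what makes Phong shading accurate but expensive: every pixel pays for a square root and a lighting calculation.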
2.4 Texture Mapping
Figure 2.8 illustrates a scene from Quake 1 with texture mapping disabled; the figure alongside it shows the same scene with texture mapping enabled. It is immediately apparent how much richness texture mapping adds to the scene.
Texture mapping is used to 'glue' a texture onto a polygon and then make it appear as if the texture is going through the same transformations as the polygon, without putting every texel through these transformations.
Until now it was enough to describe a vertex of a polygon by three parameters: the rectangular co-ordinates x, y and z. In order to texture map a polygon it is necessary to include two more parameters, known as the u and v co-ordinates or texture co-ordinates. They indicate the place in the texture from which the vertex in question gets its color. The texture itself is a two-dimensional bitmap storing color information, which is why only two numbers are needed to indicate a unique point within it.
The information presented in the previous paragraph is graphically illustrated in figure 2.9.
Figure 2.9 indicates that vertex zero of the polygon has u and v co-ordinates of 5 and 5 respectively. When drawing this vertex on the screen, these values are used to look up the color of the texture at (5, 5), and the point is then drawn using this color. To determine the color of the pixels within the polygon, the u and v co-ordinates of the vertices are interpolated across the surface of the polygon.
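This per-pixel lookup can be sketched as follows. The Texture structure and function names are illustrative; a real texture mapper would also clamp or wrap the interpolated co-ordinates so they stay inside the bitmap:

```cpp
#include <cstdint>
#include <vector>

// A texture is a two-dimensional bitmap of colors; u and v index
// into it.
struct Texture {
    int width, height;
    std::vector<uint32_t> texels;   // row-major pixel values
    uint32_t sample(int u, int v) const {
        return texels[v * width + u];
    }
};

// Interpolate the (u, v) co-ordinates between the two endpoints of a
// scan line and look up the color for pixel x.
uint32_t texelForPixel(const Texture& tex,
                       double u0, double v0, double u1, double v1,
                       int x, int xLeft, int xRight)
{
    double t = (xRight == xLeft) ? 0.0
             : static_cast<double>(x - xLeft) / (xRight - xLeft);
    int u = static_cast<int>(u0 + t * (u1 - u0));
    int v = static_cast<int>(v0 + t * (v1 - v0));
    return tex.sample(u, v);
}
```

Note that this linear (affine) interpolation of u and v is only correct for polygons parallel to the screen; perspective texture mapping, mentioned in section 2.5.2, corrects for depth.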