Unit - 5

Illumination and Shading

Introduction

Realistic displays of a scene are obtained by generating perspective projections of objects and by applying natural lighting effects to the visible surfaces. An illumination model, also called a lighting model and sometimes referred to as a shading model, is used to calculate the intensity of light that we should see at a given point on the surface of an object. A surface-rendering algorithm uses the intensity calculations from an illumination model to determine the light intensity for all projected pixel positions for the various surfaces in a scene. Surface rendering can be performed by applying the illumination model to every visible surface point, or the rendering can be accomplished by interpolating intensities across the surfaces from a small set of illumination-model calculations.

Scan-line, image-space algorithms typically use interpolation schemes, while ray-tracing algorithms invoke the illumination model at each pixel position. Sometimes, surface-rendering procedures are termed surface-shading methods.

Shading is a HARD problem.

Creating a virtual reality (say, a classroom) of a real scene involves:

• Modeling and positioning several complex objects.

• Determining the visible surfaces (VSD) and projecting the view with respect to the viewer.

• Obtaining shading using surface normals, surface properties, and light sources.

• Obtaining shadows from occlusion.

In a real world environment, light rays flow in almost infinite directions, some direct from the source and some reflected from shiny surfaces of objects. A real world image taken using a digital camera will only capture a small subset of the light rays (or light energy) passing through a small area.

To accurately construct a picture of this room via computer graphics, we have to simulate this illumination process and be able to calculate the shading at each point of each surface in our scene.

Ambient Light

A surface that is not exposed directly to a light source will still be visible if nearby objects are illuminated. In our basic illumination model, we can set a general level of brightness for a scene. This is a simple way to model the combination of light reflections from various surfaces that produces a uniform illumination called the ambient light, or background light. Ambient light has no spatial or directional characteristics. The amount of ambient light incident on each object is a constant for all surfaces and over all directions.

We can set the level for the ambient light in a scene with parameter Ia, and each surface is then illuminated with this constant value. The resulting reflected light is a constant for each surface, independent of the viewing direction and the spatial orientation of the surface. But the intensity of the reflected light for each surface depends on the optical properties of the surface; that is, how much of the incident energy is to be reflected and how much absorbed.

Diffuse Reflection

Ambient-light reflection is an approximation of global diffuse lighting effects. Diffuse reflections are constant over each surface in a scene, independent of the viewing direction. The fractional amount of the incident light that is diffusely reflected can be set for each surface with parameter kd, the diffuse-reflection coefficient, or diffuse reflectivity. Parameter kd is assigned a constant value in the interval 0 to 1, according to the reflecting properties we want the surface to have. If we want a highly reflective surface, we set the value of kd near 1. This produces a bright surface with the intensity of the reflected light near that of the incident light. To simulate a surface that absorbs most of the incident light, we set the reflectivity to a value near 0. Actually, parameter kd is a function of surface color, but for the time being we will assume kd is a constant.

If a surface is exposed only to ambient light, we can express the intensity of the diffuse reflection at any point on the surface as I = kd Ia. Since ambient light produces a flat, uninteresting shading for each surface, scenes are rarely rendered with ambient light alone. At least one light source is included in a scene, often as a point source at the viewing position. We can model the diffuse reflections of illumination from a point source in a similar way. That is, we assume that the diffuse reflections from the surface are scattered with equal intensity in all directions, independent of the viewing direction.

Such surfaces are sometimes referred to as ideal diffuse reflectors. They are also called Lambertian reflectors, since radiated light energy from any point on the surface is governed by Lambert's cosine law. This law states that the radiant energy from any small surface area dA in any direction θ relative to the surface normal is proportional to cos θ.
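As a concrete sketch (not from the text), the combined ambient and point-source diffuse contribution at a surface point can be evaluated as I = kd Ia + kd Il (N · L), clamping the dot product at zero when the light lies behind the surface. The C fragment below assumes a simple Vec3 type and unit-length N and L; the names are illustrative only.

    /* A 3D vector; N and L below are assumed to be unit length. */
    typedef struct { double x, y, z; } Vec3;

    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Basic ambient + Lambertian diffuse intensity at one surface point.
       kd : diffuse-reflection coefficient in [0, 1]
       Ia : ambient-light intensity
       Il : intensity of the point light source
       N  : unit surface normal,  L : unit vector toward the light source */
    double diffuse_intensity(double kd, double Ia, double Il, Vec3 N, Vec3 L)
    {
        double cos_theta = dot(N, L);          /* Lambert's cosine law */
        if (cos_theta < 0.0) cos_theta = 0.0;  /* light is behind the surface */
        return kd * Ia + kd * Il * cos_theta;
    }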

POLYGON-RENDERING METHODS

Consider the application of an illumination model to the rendering of standard graphics objects: those formed with polygon surfaces. The objects are usually polygon-mesh approximations of curved-surface objects, but they may also be polyhedra that are not curved-surface approximations. Scan-line algorithms typically apply a lighting model to obtain polygon surface rendering in one of two ways. Each polygon can be rendered with a single intensity, or the intensity can be obtained at each point of the surface using an interpolation scheme.

Gouraud Shading

This intensity-interpolation scheme, developed by Gouraud and generally referred to as Gouraud shading, renders a polygon surface by linearly interpolating intensity values across the surface. Intensity values for each polygon are matched with the values of adjacent polygons along the common edges, thus eliminating the intensity discontinuities that can occur in flat shading.

Each polygon surface is rendered with Gouraud shading by performing the following calculations:

  • Determine the average unit normal vector at each polygon vertex.
  • Apply an illumination model to each vertex to calculate the vertex intensity.
  • Linearly interpolate the vertex intensities over the surface of the polygon.

At each polygon vertex, we obtain a normal vector by averaging the surface normals of all polygons sharing that vertex, as illustrated in Fig. Thus, for any vertex position V, we obtain the unit vertex normal with the calculation NV = (N1 + N2 + ... + Nn) / |N1 + N2 + ... + Nn|, the normalized sum of the surface normals of the n polygons sharing V.
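A minimal C sketch of this vertex-normal averaging, assuming the same Vec3 type as above (the names are illustrative, not from the text):

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    /* Average the surface normals of the n polygons sharing a vertex and
       normalize the sum to obtain the unit vertex normal. */
    Vec3 vertex_normal(const Vec3 face_normals[], int n)
    {
        Vec3 sum = { 0.0, 0.0, 0.0 };
        for (int k = 0; k < n; k++) {
            sum.x += face_normals[k].x;
            sum.y += face_normals[k].y;
            sum.z += face_normals[k].z;
        }
        double len = sqrt(sum.x*sum.x + sum.y*sum.y + sum.z*sum.z);
        if (len > 0.0) { sum.x /= len; sum.y /= len; sum.z /= len; }
        return sum;
    }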

Intensity values over the polygon surface are then obtained by interpolating intensities along the polygon edges. For each scan line, the intensity at the intersection of the scan line with a polygon edge is linearly interpolated from the intensities at the edge endpoints.

For the example in the above Fig., the polygon edge with endpoint vertices at positions 1 and 2 is intersected by the scan line at point 4. A fast method for obtaining the intensity at point 4 is to interpolate between intensities I1 and I2 using only the vertical displacement of the scan line:

    I4 = ((y4 - y2) / (y1 - y2)) I1 + ((y1 - y4) / (y1 - y2)) I2

Similarly, the intensity at the right intersection of this scan line (point 5) is interpolated from the intensity values at vertices 2 and 3. Once these bounding intensities are established for a scan line, an interior point (such as point p in the above Fig.) is interpolated from the bounding intensities at points 4 and 5 as

    Ip = ((x5 - xp) / (x5 - x4)) I4 + ((xp - x4) / (x5 - x4)) I5

Incremental calculations are used to obtain successive edge intensity values between scan lines and to obtain successive intensities along a scan line. As shown in Fig., if the intensity at edge position (x, y) is interpolated as

    I = ((y - y2) / (y1 - y2)) I1 + ((y1 - y) / (y1 - y2)) I2

then we can obtain the intensity along this edge for the next scan line, y - 1, as

    I' = I + (I2 - I1) / (y1 - y2)

Similar calculations are used to obtain intensities at successive horizontal pixelpositions along each scan line.
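The incremental scheme for a single scan line can be sketched in C as follows; set_pixel_intensity() is a hypothetical frame-buffer routine, and the endpoint intensities I4 and I5 are assumed to have been obtained from the edge interpolation above.

    /* Hypothetical frame-buffer write (not part of the text). */
    void set_pixel_intensity(int x, int y, double intensity);

    /* Fill one scan line y of a Gouraud-shaded polygon between the left and
       right edge intersections (x4, I4) and (x5, I5), updating the intensity
       incrementally instead of re-evaluating the interpolation at every pixel. */
    void gouraud_span(int y, int x4, double I4, int x5, double I5)
    {
        double I  = I4;
        double dI = (x5 > x4) ? (I5 - I4) / (double)(x5 - x4) : 0.0;
        for (int x = x4; x <= x5; x++) {
            set_pixel_intensity(x, y, I);   /* write the interpolated intensity */
            I += dI;                        /* constant increment along the scan line */
        }
    }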

When surfaces are to be rendered in color, the intensity of each color component is calculated at the vertices. Gouraud shading can be combined with a hidden-surface algorithm to fill in the visible polygons along each scan line.

Phong Shading

A more accurate method for rendering a polygon surface is to interpolate normal vectors, and then apply the illumination model to each surface point. This method, developed by Phong Bui Tuong, is called Phong shading, or normal-vector interpolation shading. It displays more realistic highlights on a surface and greatly reduces the Mach-band effect.

A polygon surface is rendered using Phong shading by carrying out the following steps:

  • Determine the average unit normal vector at each polygon vertex.
  • Linearly interpolate the vertex normals over the surface of the polygon.
  • Apply an illumination model along each scan line to calculate projected pixel intensities for the surface points.

Interpolation of surface normals along a polygon edge between two vertices is illustrated in Fig.

The normal vector N for the scan-line intersection point along the edge between vertices 1 and 2 can be obtained by vertically interpolating between the edge endpoint normals:

    N = ((y - y2) / (y1 - y2)) N1 + ((y1 - y) / (y1 - y2)) N2

Incremental methods are used to evaluate normals between scan lines and along each individual scan line. At each pixel position along a scan line, the illumination model is applied to determine the surface intensity at that point. Intensity calculations using an approximated normal vector at each point along the scan line produce more accurate results than the direct interpolation of intensities, as in Gouraud shading. The trade-off, however, is that Phong shading requires considerably more calculations.
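A hedged C sketch of one Phong-shaded scan-line span, reusing the diffuse model from earlier; the Vec3 helpers and set_pixel_intensity() are assumptions for illustration, not part of the text.

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    void set_pixel_intensity(int x, int y, double intensity);   /* assumed routine */

    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    static Vec3 normalize(Vec3 v)
    {
        double len = sqrt(dot(v, v));
        if (len > 0.0) { v.x /= len; v.y /= len; v.z /= len; }
        return v;
    }

    /* Shade one scan-line span by interpolating the edge normals Na and Nb,
       renormalizing at each pixel, and applying the diffuse illumination model
       (kd, Ia, Il and the unit light direction L as defined earlier). */
    void phong_span(int y, int xa, Vec3 Na, int xb, Vec3 Nb,
                    double kd, double Ia, double Il, Vec3 L)
    {
        for (int x = xa; x <= xb; x++) {
            double t = (xb > xa) ? (double)(x - xa) / (double)(xb - xa) : 0.0;
            Vec3 N = { Na.x + t*(Nb.x - Na.x),
                       Na.y + t*(Nb.y - Na.y),
                       Na.z + t*(Nb.z - Na.z) };
            N = normalize(N);                    /* approximated per-pixel normal */
            double cos_theta = dot(N, L);
            if (cos_theta < 0.0) cos_theta = 0.0;
            set_pixel_intensity(x, y, kd*Ia + kd*Il*cos_theta);
        }
    }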

OpenGL

OpenGL standardised access to hardware, pushed the development responsibility of hardware interface programs, sometimes called device drivers, to hardware manufacturers, and delegated windowing functions to the underlying operating system. With so many different kinds of graphics hardware, getting them all to speak the same language in this way had a remarkable impact by giving software developers a higher-level platform for 3D-software development.

In 1992, SGI led the creation of the OpenGL architectural review board (OpenGL ARB), the group of companies that would maintain and expand the OpenGL specification for years to come. OpenGL evolved from (and is very similar in style to) SGI's earlier 3D interface, IrisGL. One of the restrictions of IrisGL was that it only provided access to features supported by the underlying hardware. If the graphics hardware did not support a feature, then the application could not use it. OpenGL overcame this problem by providing support in software for features unsupported by hardware, allowing applications to use advanced graphics on relatively low-powered systems.

In 1994, SGI played with the idea of releasing something called "OpenGL++" which included elements such as a scene-graph API (presumably based on their Performer technology). The specification was circulated among a few interested parties – but never turned into a product.[10]

Microsoft released Direct3D in 1995, which would become the main competitor of OpenGL. On December 17, 1997,[11] Microsoft and SGI initiated the Fahrenheit project, which was a joint effort with the goal of unifying the OpenGL and Direct3D interfaces (and adding a scene-graph API too). In 1998, Hewlett-Packard joined the project.[12] It initially showed some promise of bringing order to the world of interactive 3D computer graphics APIs, but on account of financial constraints at SGI, strategic reasons at Microsoft, and general lack of industry support, it was abandoned in 1999.[13]

OpenGL releases are backward compatible. In general, graphics cards released after the OpenGL version release dates shown below support those version features, and all earlier features. For example, the GeForce 6800, listed below, supports all features up to and including OpenGL 2.0. (Specific cards may conform to an OpenGL spec, but selectively not support certain features. For details, the GPU Caps Viewer software includes a database of cards and their supported specs.)
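At run time, an application can check what the current driver and card actually expose by querying the version and extension strings; a minimal sketch, assuming a rendering context is already current:

    #include <stdio.h>
    #include <GL/gl.h>

    /* Print the version and extensions supported by the current OpenGL context. */
    void print_gl_support(void)
    {
        printf("Vendor:     %s\n", (const char *)glGetString(GL_VENDOR));
        printf("Renderer:   %s\n", (const char *)glGetString(GL_RENDERER));
        printf("Version:    %s\n", (const char *)glGetString(GL_VERSION));
        printf("Extensions: %s\n", (const char *)glGetString(GL_EXTENSIONS));
    }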

OpenGL 1.0

Released January, 1992.
The first OpenGL specification was released by Mark Segal and Kurt Akeley.

OpenGL 1.1

Released January, 1997.
OpenGL 1.1 focused on supporting textures and texture formats on GPU hardware.

OpenGL 1.2

Released March 16, 1998.
OpenGL 1.2 focused on supporting volume textures, packed pixels, normal rescaling, clamped/edge texture sampling and image processing.
Supported GPU Cards: Rage 128, Rage 128 GL, Rage XL/XC, Rage 128 Pro, Rage Fury MAXX, and all later cards.

OpenGL 1.2.1

Released October 14, 1998
OpenGL 1.2.1, a minor release following OpenGL 1.2 (March 16, 1998), added multi-texture, or texture units, to the rendering pipeline. This allowed multiple textures to be blended per pixel during rasterization.

OpenGL 1.3

Released August 14, 2001.
OpenGL 1.3 added support for cubemap textures, multi-texturing, multi-sampling, and texture unit combine operations.

OpenGL 1.4

Released July 24, 2002.
OpenGL 1.4 added hardware shadowing support, fog coordinates, automatic mipmap generation, and additional texture modes.

OpenGL 1.5

Released July 29, 2003.
OpenGL 1.5 added support for vertex buffer objects (VBOs), occlusion queries, and extended shadowing functions.

OpenGL 4.1

Announced 26 July 2010.

OpenGL Fundamentals

This section explains some of the concepts inherent in OpenGL.

Primitives and Commands

OpenGL draws primitives—points, line segments, or polygons—subject to several selectable modes. You can control modes independently of each other; that is, setting one mode doesn't affect whether other modes are set (although many modes may interact to determine what eventually ends up in the frame buffer). Primitives are specified, modes are set, and other OpenGL operations are described by issuing commands in the form of function calls.

Primitives are defined by a group of one or more vertices. A vertex defines a point, an endpoint of a line, or a corner of a polygon where two edges meet. Data (consisting of vertex coordinates, colors, normals, texture coordinates, and edge flags) is associated with a vertex, and each vertex and its associated data are processed independently, in order, and in the same way. The only exception to this rule is if the group of vertices must be clipped so that a particular primitive fits within a specified region; in this case, vertex data may be modified and new vertices created. The type of clipping depends on which primitive the group of vertices represents.
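For illustration, the immediate-mode commands of OpenGL 1.x specify a primitive and its per-vertex data like this (a sketch, not a complete program):

    #include <GL/gl.h>

    /* Specify a single triangle primitive with per-vertex data (color and
       normal).  Each vertex and its associated data are processed
       independently and in order. */
    void draw_triangle(void)
    {
        glEnable(GL_DEPTH_TEST);            /* a mode, set independently of others */

        glBegin(GL_TRIANGLES);
            glColor3f(1.0f, 0.0f, 0.0f);    /* data associated with the next vertex */
            glNormal3f(0.0f, 0.0f, 1.0f);
            glVertex3f(-1.0f, -1.0f, 0.0f);

            glColor3f(0.0f, 1.0f, 0.0f);
            glVertex3f( 1.0f, -1.0f, 0.0f);

            glColor3f(0.0f, 0.0f, 1.0f);
            glVertex3f( 0.0f,  1.0f, 0.0f);
        glEnd();
    }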

Commands are always processed in the order in which they are received, although there may be an indeterminate delay before a command takes effect. This means that each primitive is drawn completely before any subsequent command takes effect. It also means that state-querying commands return data that's consistent with complete execution of all previously issued OpenGL commands.
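Two commands relate directly to this buffering behaviour: glFlush() forces previously issued commands to begin execution in finite time, and glFinish() blocks until they have all completed. A small sketch:

    #include <GL/gl.h>

    /* Issue a frame's worth of commands and make sure they are dispatched. */
    void submit_frame(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        /* ... issue drawing commands here ... */
        glFlush();      /* commands will begin executing in finite time      */
        /* glFinish();     would additionally block until they have finished */
    }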

Procedural versus Descriptive

OpenGL provides you with fairly direct control over the fundamental operations of two- and three-dimensional graphics. This includes specification of such parameters as transformation matrices, lighting equation coefficients, antialiasing methods, and pixel update operators. However, it doesn't provide you with a means for describing or modeling complex geometric objects. Thus, the OpenGL commands you issue specify how a certain result should be produced (what procedure should be followed) rather than what exactly that result should look like. That is, OpenGL is fundamentally procedural rather than descriptive. Because of this procedural nature, it helps to know how OpenGL works—the order in which it carries out its operations, for example—in order to fully understand how to use it.
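A brief sketch of this procedural style, setting a transformation matrix and some lighting coefficients with the fixed-function API (the specific values are illustrative only):

    #include <GL/gl.h>

    /* OpenGL is procedural: the application sets matrices, lighting
       coefficients and other state explicitly rather than describing objects. */
    void configure_state(void)
    {
        GLfloat light_diffuse[]  = { 1.0f, 1.0f, 1.0f, 1.0f };
        GLfloat light_position[] = { 1.0f, 1.0f, 1.0f, 0.0f };

        glMatrixMode(GL_MODELVIEW);              /* choose a transformation matrix */
        glLoadIdentity();
        glTranslatef(0.0f, 0.0f, -5.0f);         /* then operate on it step by step */

        glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);   /* lighting coefficients */
        glLightfv(GL_LIGHT0, GL_POSITION, light_position);
        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
    }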

Execution Model

The model for interpretation of OpenGL commands is client-server. An application (the client) issues commands, which are interpreted and processed by OpenGL (the server). The server may or may not operate on the same computer as the client. In this sense, OpenGL is network-transparent. A server can maintain several GL contexts, each of which is an encapsulated GL state. A client can connect to any one of these contexts. The required network protocol can be implemented by augmenting an already existing protocol (such as that of the X Window System) or by using an independent protocol. No OpenGL commands are provided for obtaining user input.

The effects of OpenGL commands on the frame buffer are ultimately controlled by the window system that allocates frame buffer resources. The window system determines which portions of the frame buffer OpenGL may access at any given time and communicates to OpenGL how those portions are structured. Therefore, there are no OpenGL commands to configure the frame buffer or initialize OpenGL. Frame buffer configuration is done outside of OpenGL in conjunction with the window system; OpenGL initialization takes place when the window system allocates a window for OpenGL rendering. (GLX, the X extension of the OpenGL interface, provides these capabilities, as described in "OpenGL Extension to the X Window System." )
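As an illustration of this division of labour, a minimal GLX sketch is shown below: the window system (X11 plus GLX) allocates the visual, colormap, window, and frame buffer, and OpenGL rendering begins once a context is made current. Error handling and the event loop are omitted.

    #include <X11/Xlib.h>
    #include <GL/glx.h>
    #include <GL/gl.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        int attribs[] = { GLX_RGBA, GLX_DEPTH_SIZE, 16, GLX_DOUBLEBUFFER, None };
        XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

        /* The window system allocates the frame buffer resources. */
        XSetWindowAttributes swa;
        swa.colormap   = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                         vi->visual, AllocNone);
        swa.event_mask = ExposureMask;
        Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 640, 480, 0,
                                   vi->depth, InputOutput, vi->visual,
                                   CWColormap | CWEventMask, &swa);
        XMapWindow(dpy, win);

        /* OpenGL initialization: create a context and make it current. */
        GLXContext ctx = glXCreateContext(dpy, vi, NULL, GL_TRUE);
        glXMakeCurrent(dpy, win, ctx);

        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glXSwapBuffers(dpy, win);

        glXMakeCurrent(dpy, None, NULL);
        glXDestroyContext(dpy, ctx);
        XCloseDisplay(dpy);
        return 0;
    }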

Basic OpenGL Operation

The figure shown below gives an abstract, high-level block diagram of how OpenGL processes data. In the diagram, commands enter from the left and proceed through what can be thought of as a processing pipeline. Some commands specify geometric objects to be drawn, and others control how the objects are handled during the various processing stages.

Figure 1-1. OpenGL Block Diagram

As shown by the first block in the diagram, rather than having all commands proceed immediately through the pipeline, you can choose to accumulate some of them in a display list for processing at a later time.
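A short sketch of accumulating commands in a display list and replaying them later (names are illustrative):

    #include <GL/gl.h>

    /* Record drawing commands in a display list instead of executing them
       immediately; glCallList() replays them later. */
    GLuint build_list(void)
    {
        GLuint list = glGenLists(1);        /* reserve one display-list name */
        glNewList(list, GL_COMPILE);        /* record commands, do not execute yet */
            glBegin(GL_TRIANGLES);
                glVertex3f(-1.0f, -1.0f, 0.0f);
                glVertex3f( 1.0f, -1.0f, 0.0f);
                glVertex3f( 0.0f,  1.0f, 0.0f);
            glEnd();
        glEndList();
        return list;
    }

    /* Later, in the drawing code: glCallList(list); executes the stored commands. */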

The evaluator stage of processing provides an efficient means for approximating curve and surface geometry by evaluating polynomial commands of input values. During the next stage, per-vertex operations and primitive assembly, OpenGL processes geometric primitives—points, line segments, and polygons, all of which are described by vertices. Vertices are transformed and lit, and primitives are clipped to the viewport in preparation for the next stage.
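For example, the evaluator stage can approximate a cubic Bézier curve from four control points; the sketch below follows the standard glMap1f/glEvalCoord1f pattern, with illustrative control-point values:

    #include <GL/gl.h>

    /* Approximate a cubic Bezier curve: the control points define a polynomial
       map, and glEvalCoord1f() generates the corresponding vertices. */
    void draw_curve(void)
    {
        static GLfloat ctrl[4][3] = {
            { -4.0f, -4.0f, 0.0f }, { -2.0f,  4.0f, 0.0f },
            {  2.0f, -4.0f, 0.0f }, {  4.0f,  4.0f, 0.0f }
        };

        glMap1f(GL_MAP1_VERTEX_3, 0.0f, 1.0f, 3, 4, &ctrl[0][0]);
        glEnable(GL_MAP1_VERTEX_3);

        glBegin(GL_LINE_STRIP);
        for (int i = 0; i <= 30; i++)
            glEvalCoord1f((GLfloat)i / 30.0f);   /* evaluator produces the vertices */
        glEnd();
    }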

Rasterization produces a series of frame buffer addresses and associated values using a two-dimensional description of a point, line segment, or polygon. Each fragment so produced is fed into the last stage, per-fragment operations, which performs the final operations on the data before it's stored as pixels in the frame buffer. These operations include conditional updates to the frame buffer based on incoming and previously stored z-values (for z-buffering) and blending of incoming pixel colors with stored colors, as well as masking and other logical operations on pixel values.
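A small sketch of enabling typical per-fragment operations (z-buffering, blending, and a color mask); the particular choices are illustrative:

    #include <GL/gl.h>

    /* Typical per-fragment state: conditional frame-buffer updates based on
       stored z-values, plus blending of incoming colors with stored colors. */
    void enable_fragment_tests(void)
    {
        glEnable(GL_DEPTH_TEST);                            /* z-buffering */
        glDepthFunc(GL_LESS);                               /* keep the nearer fragment */

        glEnable(GL_BLEND);                                 /* blend with stored color */
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);    /* masking of color writes */
    }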