Non-Photorealistic Rendering Techniques for a Game Engine

Adrian Ilie

Department of Computer Science

University of North Carolina at Chapel Hill


Abstract

The visual style of many interactive 3D applications can be changed to new and interesting styles without modifying their source code. This is done by intercepting the OpenGL graphics library and changing the drawing calls. Two of the most interesting visual styles are sharp features and cartoon-style rendering.

Sharp features convey a great deal of information with very few strokes. Technical illustrations and engineering CAD diagrams, as well as non-photorealistic rendering techniques, exploit these features to enhance the appearance of the underlying graphics models.

Cartoon-style rendering attempts to emulate the work of artists in animated films. Cartoon characters are intentionally “two-dimensional”: cartoonists typically use solid colors that do not vary across the materials they represent.

This project presents a synergy of these approaches: intercepting and replacing OpenGL library calls to modify the visual style of an application [1], rendering special features of polygonal models by introducing new polygons [2], and rendering the scene in cartoon style [3,4,5].

1. Introduction.

Many different visual styles are possible for interactive 3D applications. However, most applications’ visual styles are tightly coupled to the applications themselves, and prototyping and experimenting with new visual styles is difficult.

Figure 1: A stylized rendering (from [1]).

The goal of our project is to explore varying the visual style of an existing application without major modifications to its source code. To achieve these alterations, we are limited to intercepting the output of the application at a common level: calls to the graphics library.

The challenge is that the only information received from the application is low-level drawing commands and primitives, and this precludes many current stylized rendering techniques. In the absence of special data structures, recovering connectivity information requires random traversals of the scene graph. Maintaining this information also increases the memory requirements.

One way to solve this problem is to gather more high-level information by extracting and maintaining state information at the drawing-library level. The traditional approach is to reconstruct and traverse the scene polygon graph, then decide on the desired rendering attributes for each polygon. However, this is a cumbersome process, and is usually not supported by rendering APIs or hardware.

Another solution is to render new visual styles without making use of connectivity information. This is accomplished by introducing new polygons with appropriate color, shape and orientation, and by using special shading and texturing techniques. We illustrate these techniques for special features and cartoon-style rendering.

Special features such as silhouettes and sharp ridges of a polygonal scene are usually displayed by explicitly identifying and then rendering “edges”, using connectivity information. These features can also be displayed without connectivity information if new polygons with appropriate color, shape and orientation are introduced based only on the information at the vertices of the existing polygons. We illustrate this technique in Section 3.

Another interesting NPR area is cartoon-style rendering. This effect can also be implemented non-invasively, using just local information and 1D textures. We illustrate this technique in Section 4.

While our method may not produce imagery to rival state-of-the-art non-photorealistic rendering systems, it can be dynamically applied to a real-time rendering application: a game engine.

2. Intercepting OpenGL calls.

In [1], the authors present a general method to replace the system's OpenGL library with a custom library that implements the standard interface and calls the real system library when needed. The real library is dynamically loaded and a name mapping mechanism is provided.

In this project, we use a slightly less general yet conceptually similar approach. The application whose visual style we modify is a well-known shareware game, Quake. Its authors already route all rendering calls in the source code through libraries that are loaded dynamically. We use this pre-existing structure to plug in an implementation of the NPR techniques described in [2,3,4,5]. The next sections provide a short description of the rendering process.
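The plug-in structure can be sketched as follows. This is a hypothetical Python sketch (the actual game engine uses C and dynamically loaded libraries; all class and method names here are our own): the game issues drawing calls through a common renderer interface, so an NPR renderer can wrap the real one and alter the drawing calls without the rest of the source code ever noticing.

```python
class GLRenderer:
    """Stand-in for the real rendering library."""
    def __init__(self):
        self.calls = []  # record calls so we can inspect what was drawn

    def draw_polygon(self, vertices):
        self.calls.append(("poly", len(vertices)))


class NPRRenderer:
    """Interceptor: forwards each call to the real renderer, then adds
    stylized geometry (here, a placeholder quad per polygon edge)."""
    def __init__(self, real):
        self.real = real

    def draw_polygon(self, vertices):
        self.real.draw_polygon(vertices)  # draw the original polygon
        for edge in zip(vertices, vertices[1:] + vertices[:1]):
            # placeholder "edge quad": a degenerate 4-vertex polygon
            self.real.draw_polygon(list(edge) * 2)


def render_scene(renderer):
    # The game only sees the common interface; either renderer plugs in.
    renderer.draw_polygon([(0, 0, 0), (1, 0, 0), (0, 1, 0)])


real = GLRenderer()
render_scene(NPRRenderer(real))  # one original triangle + 3 edge quads
```

The same substitution happens at the level of dynamically loaded C libraries in the actual engine; the sketch only illustrates the dispatch structure.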

3. Rendering sharp features.

The most commonly used features are silhouettes, ridges and intersections. For polygonal meshes, the silhouette edges consist of the visible segments of all edges that connect back-facing polygons to front-facing polygons. A crease edge is a ridge if the dihedral angle between adjacent polygons is less than a threshold, and a valley if the angle is greater than a (usually different and larger) threshold. An intersection edge is the segment common to the interior of two intersecting polygons.

Figure 2: Sharp features: silhouettes (i), ridges (ii), valleys (iii), and their combination (iv) (from [2]).
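These classifications can be sketched in Python as follows. The interior-angle convention, the helper names, and the example thresholds are our own choices, not taken from [2]: we measure the dihedral angle as the interior angle between the two faces, so coplanar faces give π, convex creases give less, and concave creases give more.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def interior_dihedral(n1, n2, p1, p2):
    """Interior dihedral angle (radians) at the edge shared by two faces.

    n1, n2: unit outward normals of the faces.
    p1: a point on the shared edge; p2: a point on face 2 off the edge.
    Coplanar faces give pi; convex creases less, concave creases more.
    """
    a = math.acos(max(-1.0, min(1.0, dot(n1, n2))))
    # convex if face 2 bends behind face 1's plane
    convex = dot(n1, [q - c for q, c in zip(p2, p1)]) < 0
    return math.pi - a if convex else math.pi + a

def classify_crease(angle, ridge_thresh, valley_thresh):
    """Ridge below one threshold, valley above a larger one (Section 3)."""
    if angle < ridge_thresh:
        return "ridge"
    if angle > valley_thresh:
        return "valley"
    return "smooth"
```

For example, a 90° convex roof edge has interior angle π/2 and classifies as a ridge for a 120° ridge threshold, while the matching concave corner has interior angle 3π/2 and classifies as a valley for a 240° valley threshold.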

In this project, we only implemented silhouettes and ridges. Because the models are coarsely tessellated, the remaining crease features provide very little detail.

We assume that the scene consists of oriented convex polygons. This allows us to distinguish between front- and back-facing polygons for silhouette calculations, and ensures a correct notion of the dihedral angle between adjacent polygons.

3.1. Silhouettes.

The basic idea in our approach is to enlarge each back-facing polygon so that the projection of the additional part appears around the projection of the adjacent front-facing polygon, if any. If there is no adjacent front-facing polygon, the enlarged part of the back-facing polygon remains hidden behind existing front-facing polygons. The normal of the back-facing polygon is flipped to ensure that it is not culled during back-face culling. To achieve a given width in the image space, the degree of enlargement for each back-facing polygon is controlled, depending on its orientation and distance with respect to the camera.

Figure 3: Silhouettes as extensions of back-facing polygons (from [2]).
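The two ingredients of this approach can be sketched in Python. The formulation below is our own simplification: [2] computes the enlargement more carefully (and on graphics hardware), while here each vertex is simply pushed away from the polygon centroid by a world-space offset chosen so that, under a pinhole camera, the added rim projects to roughly a constant pixel width.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def is_back_facing(normal, view):
    """view: direction from the polygon toward the camera."""
    return dot(normal, view) <= 0

def enlarge_back_facing(vertices, depth, width_px, focal_px):
    """Push each vertex of a back-facing polygon away from its centroid.

    A world-space offset d at depth z projects to d * focal_px / z
    pixels, so d = width_px * z / focal_px keeps the rim roughly
    width_px wide on screen. (Orientation effects, which the full
    method also accounts for, are ignored in this sketch.)
    """
    d = width_px * depth / focal_px
    c = [sum(v[i] for v in vertices) / len(vertices) for i in range(3)]
    enlarged = []
    for v in vertices:
        r = [a - b for a, b in zip(v, c)]          # centroid -> vertex
        length = math.sqrt(dot(r, r)) or 1.0
        enlarged.append(tuple(a + d * b / length for a, b in zip(v, r)))
    return enlarged
```

In the real pipeline the enlarged polygon would also have its normal flipped, as described above, so that back-face culling does not discard it.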

3.2. Ridges.

For ridges, we modify each front-facing polygon. We want to display in black the visible part of each edge for which the dihedral angle between adjacent polygons is less than or equal to a user-selectable global threshold θ, superimposed on the original model if desired.

We add black-colored quadrilaterals (or quads for short) to each edge of each front-facing polygon. The quads are oriented at angle θ with respect to the polygon, as seen in Figure 4(ii) and (iii). Visibility of the original and the new polygons is resolved using the traditional depth buffer. As shown in Figure 4(iv), at a sharp ridge the appropriate ‘edge’ is highlighted.

Figure 4: Ridges. (i) Front-facing polygons, (ii) and (iii) black quads at threshold angle θ are added to each edge of each front-facing polygon, (iv) at a sharp ridge, the black quads remain visible (from [2]).

When the dihedral angle is greater than θ, the added quadrilaterals are hidden by the neighboring front-facing polygons. Figure 5(i) and (ii) show new quadrilaterals that remain hidden after the visibility computation in Figure 5(iii).

Figure 5: Ridge without sharp angle. (i) and (ii) Black quads are added, (iii) the quads remain invisible after rasterization (from [2]).
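The quad construction above can be sketched as follows. The parameterization is our own (not the exact formulation from [2]): from the edge direction and the face normal we build the unit in-plane direction b pointing out of the polygon across the edge, then extend the quad along v = cos(θ)·b − sin(θ)·n, i.e. tilted by θ below the face plane, so a neighboring polygon hides it exactly when the crease is shallower than the threshold.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def ridge_quad(p1, p2, face_normal, interior_point, theta, width):
    """Black quad attached to edge (p1, p2) at angle theta to the face.

    interior_point: any point inside the polygon, used to orient b so
    that it points out of the polygon across the edge.
    """
    e = [q - a for a, q in zip(p1, p2)]        # edge direction
    b = cross(e, face_normal)                  # in-plane, perpendicular to edge
    blen = math.sqrt(dot(b, b))
    b = [x / blen for x in b]
    inward = [q - a for a, q in zip(p1, interior_point)]
    if dot(b, inward) > 0:                     # flip b to point outward
        b = [-x for x in b]
    v = [math.cos(theta) * bi - math.sin(theta) * ni
         for bi, ni in zip(b, face_normal)]    # tilted theta below the plane
    far1 = tuple(a + width * x for a, x in zip(p1, v))
    far2 = tuple(a + width * x for a, x in zip(p2, v))
    return [p1, p2, far2, far1]
```

By construction the quad direction v is a unit vector whose component along the face normal is −sin(θ), which is what places the quad at exactly the threshold angle.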

4. Cartoon-style Rendering.

Cartoon characters are intentionally “2D”. Animators deliberately reduce the amount of visual detail in their drawings in order to draw the audience into the story and to add humor and emotional appeal. Rather than shading the character to give it a three-dimensional appearance, the cartoonist typically uses solid colors that do not vary across the materials they represent.

Often the artist will shade the part of a material that is in shadow with a color that is black or a darkened version of the main material color. This helps add lighting cues, as well as cues to the shape and context of the character in a scene. The boundary between shadowed and illuminated colors is a hard edge that follows the contours of the object or character.

Another cue used by cartoonists is highlighting small areas with white or a lightened version of the main material color. This achieves an effect similar to specular highlighting, but the boundary is a hard edge, just as in the previous case.

The result is similar to the character shown in Figure 6 below, which shows both the dark areas and the highlights.

Figure 6: Olaf, rendered in cartoon style (adapted from [4]).

To achieve this effect, we need a new illumination model in which the light values are not smoothly interpolated across the surface. This is accomplished by using a 1D texture map and setting the texture coordinate at each vertex to a value proportional to the amount of light the vertex receives.

Figure 7: Generation of texture coordinates from the amount of light a surface receives (adapted from [4]).
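The resulting shading model can be sketched in Python. This is a minimal sketch of the technique described in [4,5]; the band boundaries, colors, and texture size are our own illustrative choices: the "texture" is a short array of a few solid color bands, the texture coordinate is the clamped diffuse term max(L·n, 0), and a nearest-texel lookup keeps the shadow/base/highlight boundaries hard instead of interpolating them.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Hypothetical 1D "texture": a few solid color bands instead of a smooth
# ramp. Band boundaries (u = 0.5 and u = 0.95) are our own choices.
SHADOW, BASE, HIGHLIGHT = (40, 40, 120), (80, 80, 220), (230, 230, 255)
TOON_TEXTURE = [SHADOW] * 50 + [BASE] * 45 + [HIGHLIGHT] * 5  # 100 texels

def toon_texcoord(normal, light_dir):
    """Texture coordinate proportional to received light: u = max(L.n, 0)."""
    return max(0.0, dot(normal, light_dir))

def toon_shade(normal, light_dir):
    u = toon_texcoord(normal, light_dir)
    texel = min(int(u * len(TOON_TEXTURE)), len(TOON_TEXTURE) - 1)
    return TOON_TEXTURE[texel]  # nearest lookup keeps the boundary hard
```

A vertex facing the light (u near 1) lands in the highlight band, a vertex facing away (u = 0) in the shadow band, and everything in between in the flat base color, reproducing the hard-edged shading of Figure 6.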

To simulate these hard transitions, the 1D texture contains only a few solid color bands (for example shadow, base, and highlight colors), so the texture lookup produces sharp boundaries instead of smooth gradients.

Bibliography.

[1] Alex Mohr, Michael Gleicher: “Non-Invasive, Interactive, Stylized Rendering”. Proceedings of the 2001 ACM Symposium on Interactive 3D Graphics.

[2] Ramesh Raskar: “Hardware Support for Non-photorealistic Rendering”. SIGGRAPH/Eurographics Workshop on Graphics Hardware, Los Angeles, 2001.

[3] Bert Freudenberg, Maic Masuch, Thomas Strothotte: “Walk-Through Illustrations: Frame-Coherent Pen-and-Ink Style in a Game Engine”. Eurographics 2001.

[4] Adam Lake, Carl Marshall, Mark Harris, Marc Blackstein: “Stylized Rendering Techniques for Scalable Real-Time 3D Animation”. Proceedings of NPAR 2000.

[5] Jeff Lander: “Shades of Disney: Opaquing a 3D World”, Game Developer Magazine, March 2000.
