2003.06.26

Visual detail is often conveyed using clever lighting algorithms that simulate a variety of physical properties. These techniques are based on estimating the amount of light energy transmitted, reflected or absorbed at a given location. All such methods derive from the rendering equation, a seminal contribution to computer graphics by James T. Kajiya in 1986. Here, we (re)introduce the rendering equation as it was originally presented [47].

Kajiya reworked the idea of radiative heat transfer [102] to make it better suited to computer graphics. What he introduced was

I(x, x') = g(x, x') [ ε(x, x') + ∫_S ρ(x, x', x'') I(x', x'') dx'' ]

where I(x, x') is the intensity of light passing from x' to x, g(x, x') is the geometry term, ε(x, x') is the intensity of light emitted from x' to x, and ρ(x, x', x'') is the intensity of light from x'', scattered by the surface patch at x' before finally striking x. The integral is evaluated over S, the union of all surfaces in the scene.

The intensity of light that travels from point x' to point x assumes there are no surfaces in between to deflect or scatter the light. I(x, x') is the energy of radiation per unit time, per unit area of source dx', per unit area dx of the target. In many cases, computer graphicists do not deal with joules of energy when talking about the intensity of light. Instead, more descriptive terms are used. White, for example, is considered a hot (or high intensity) colour, while deep blues, purples and very dark shades of grey are cool (or low intensity) colours. Once all calculations are done, the numerical value of I(x, x') is usually normalised to the range [0.0, 1.0]. The energy that reaches x is given by the following equation

dE = I(x, x') dt dx dx'
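In code, that final normalisation step often amounts to a simple clamp. A minimal sketch (the function name normalise_intensity is my own; a real renderer might apply tone mapping instead):

```c
/* Clamp a raw intensity value to the displayable range [0.0, 1.0].
 * Hypothetical helper; the text does not specify the exact
 * normalisation scheme, so this is the simplest possible one. */
double normalise_intensity(double raw)
{
    if (raw < 0.0) return 0.0;
    if (raw > 1.0) return 1.0;
    return raw;
}
```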

The quantity g(x, x') represents the occlusion between point x' and point x. The value of g(x, x') is exactly zero if there is no straight line of sight from x' to x and vice versa. From a geometric standpoint this makes perfect sense: if the geometry of the scene is such that no light can travel between two points, then whatever illumination x' provides cannot be absorbed and/or reflected at x. If there is, however, some mutual visibility between the two points, g(x, x') is equal to 1/r², where r is the distance from x' to x.
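The geometry term can be sketched directly from this definition. The visibility test itself (ray casting against the scene) is assumed to exist elsewhere and is passed in as a flag; the function name is my own:

```c
/* Geometry term g(x, x'): zero when the two points are mutually
 * occluded, 1/r^2 otherwise, where r is the distance between them.
 * The mutual-visibility query is assumed to be answered elsewhere
 * (e.g. by a ray cast) and supplied as a flag. */
double geometry_term(double r, int mutually_visible)
{
    if (!mutually_visible || r <= 0.0)
        return 0.0;
    return 1.0 / (r * r);
}
```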

The amount of energy emitted by a surface at point x' that reaches a point x is measured per unit time, per unit area of source, per unit area of target. This sounds very similar to the units of the transport intensity I. The difference, however, is that the emitted contribution also depends on the distance between x' and x. So,

dE = g(x, x') ε(x, x') dt dx dx'

Surfaces are often illuminated indirectly. That is, some point x receives scattered light from point x' that originated from x''. The scattering term ρ is a dimensionless quantity. The energy arriving at x is

dE = g(x, x') ∫_S ρ(x, x', x'') I(x', x'') dx'' dt dx dx'

As presented above, evaluating the integrated intensity at every point on a surface is a very expensive operation. Monte Carlo methods can be employed, but such compute-intensive tasks are not well suited to real-time applications. Instead, simplified expressions for I(x, x') are used for indoor lighting models.
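As one example of such a simplification, the familiar Lambertian diffuse term replaces the full integral with a single cosine factor. A minimal sketch, assuming both vectors are already normalised (the names here are mine, not from [47]):

```c
typedef struct { double x, y, z; } Vec3;

static double dot3(Vec3 a, Vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Lambertian diffuse shading: a common real-time simplification of
 * I(x, x'). Intensity falls off with the cosine of the angle between
 * the surface normal n and the unit direction to the light.
 * Both vectors are assumed normalised by the caller. */
double lambert(Vec3 n, Vec3 to_light, double light_intensity)
{
    double cosine = dot3(n, to_light);
    if (cosine < 0.0)
        cosine = 0.0;   /* light is behind the surface */
    return light_intensity * cosine;
}
```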

[47] Kajiya, James T., The Rendering Equation, ACM Computer Graphics (SIGGRAPH '86
Proceedings), 20(4):143-150, 1986

[102] Modest, M., Radiative Heat Transfer 2nd edition, Academic Press, 2003, ISBN
0125031637

2003.08.08

Cel shading is a common nonphotorealistic rendering (NPR) technique. Objects are drawn using a very limited colour palette. Transitions in colour are abrupt, so roundness is implied rather than smoothly shaded. The silhouette and any creases are traced with thick, dark lines. Raskar developed a cel shading technique that does not require any connectivity information among the polygons of a given mesh [103]. The same shading rules are applied to every polygon; only the vertex positions, vertex normals and the current viewing direction are used. The method assumes the mesh consists of oriented, convex polygons, and it can be easily integrated with any hardware-accelerated rendering pipeline.

The silhouette of a mesh is generated by enlarging its back-facing polygons. Parts of each enlarged polygon remain hidden behind adjacent front-facing polygons; the fringe that is not hidden shows up as the thick silhouette line.

The enlargement process requires several known quantities: the view vector v, the camera axis vector c, the normal n of the polygon and the orientation of each edge e. Vector c is perpendicular to the image plane. The distance from a given vertex to the camera is the scalar z.
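A minimal sketch of the facing test implied here, assuming v is the unit vector from the polygon towards the eye (the sign convention depends on the projection in use, and the function name is my own):

```c
typedef struct { double x, y, z; } Vec3;

static double dot3(Vec3 a, Vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* A polygon is back-facing when its normal n points away from the
 * viewer, i.e. when dot(n, v) < 0, with v the direction from the
 * polygon towards the camera. */
int is_back_facing(Vec3 n, Vec3 v)
{
    return dot3(n, v) < 0.0;
}
```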

For an n-sided polygon with vertices p_i, i = 0, 1, ..., n-1, and edges e_ij, j = (i+1) mod n, 'pushing' the edges outward creates a 2n-sided polygon. The vertices of the enlarged polygon include those generated by the following equation

where

and w_sil controls the enlargement of the back-facing polygon.

The ridges of a mesh are generated by modifying front-facing polygons. Here, we attach a series of black quads, tilted at some threshold angle θ, to each edge of the front-facing polygon. Typical values of θ lie in the range (0°, 180°).

The vertices of the quads include those computed as follows

where

and

w_ridge controls the thickness of the quads.

The silhouette and sharp ridges can be rendered in one pass. The pseudocode is

for each polygon {
    if front-facing {
        render polygon white
        for each edge of the polygon
            attach black quad at angle θ
    }
    if back-facing
        enlarge and render in black
}

The valleys of a mesh are also generated by modifying front-facing polygons. Here, we attach a series of black quads, tilted at some threshold angle φ, to each edge of the front-facing polygon. Typical values of φ lie in the range (180°, 360°). Rendering valleys, however, requires slightly more complex logic [103].

As paraphrased from the GNU General Public License, "the following instances of program code are distributed in the hope that they will be useful, but without any warranty; without even the implied warranty of merchantability or fitness for a particular purpose." Please do not hesitate to notify me of any errors in logic and/or semantics.

The following are very basic code fragments, not a full implementation.

typedef struct vec  { double x, y, z, w; } Vector;
typedef struct vtex { double x, y, z; } Vertex;

typedef struct tri {
    Vertex p[3];    /* vertices */
    Vector e[3];    /* edges */
    Vector n;       /* normal */
} Triangle;

Vector cross(Vector a, Vector b)
{
    Vector c;
    c.x = a.y*b.z - a.z*b.y;
    c.y = a.z*b.x - a.x*b.z;
    c.z = a.x*b.y - a.y*b.x;
    c.w = 0.0;      /* unused here; zeroed so c is fully initialised */
    return c;
}

double dot(Vector a, Vector b)
{
    return (a.x*b.x + a.y*b.y + a.z*b.z);
}
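As a usage example built on those fragments (the helper triangle_normal is my own addition, not from [103]): the unnormalised face normal of a counter-clockwise triangle is the cross product of two of its edges.

```c
typedef struct vec { double x, y, z, w; } Vector;

Vector cross(Vector a, Vector b)
{
    Vector c;
    c.x = a.y*b.z - a.z*b.y;
    c.y = a.z*b.x - a.x*b.z;
    c.z = a.x*b.y - a.y*b.x;
    c.w = 0.0;
    return c;
}

/* Unnormalised face normal of a triangle with counter-clockwise
 * vertices p0, p1, p2: the cross product of edges (p1 - p0) and
 * (p2 - p0). Hypothetical helper for illustration. */
Vector triangle_normal(Vector p0, Vector p1, Vector p2)
{
    Vector e0 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z, 0.0 };
    Vector e1 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z, 0.0 };
    return cross(e0, e1);
}
```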

[93] Lake, Adam, Carl Marshall, Mark Harris, and Marc Blackstein, Stylized Rendering
Techniques for Scalable Real-Time 3D Animation, International Symposium on
Nonphotorealistic Animation and Rendering (NPAR) 2000, 13-20

[95] Markosian, Lee, Michael A. Kowalski, Samuel J. Trychin, Lubomir D. Bourdev,
Daniel Goldstein, and John F. Hughes, Real-Time Nonphotorealistic Rendering,
Brown University, NSF Science and Technology Centre, Computer Graphics and
Scientific Visualisation, 1997

[103] Raskar, Ramesh, Hardware Support for Non-photorealistic Rendering, Proceedings
of the ACM SIGGRAPH/Eurographics Workshop on Graphics Hardware 2001, 41-47