Page 10-4: Lighting in Shaders

Box 1: Lighting in Shaders

The first thing we usually need to do in a shader is compute lighting. The simple shaders from Page 2 didn't have lighting (so all sides of the cube looked the same).

We discussed the equations for a simple lighting model (Phong) in class. You can find the shader code for this all over the web and even in some of the required readings.

If you recall, in order to compute lighting at a point, we need to know:

  1. The local geometry (really just the normal vector)
  2. Information about the surface property (such as its color)
  3. Information about the lights (color, intensity, direction)
  4. Information about the camera (so we have the eye direction for specular computations)

The geometry (#1) is different for every point - we'll need to pass it to the shader as a varying variable.

Information about the surface is constant for the object, so it goes into uniform variables. We could pass per-vertex colors, or do a texture lookup (in which case the texture is a uniform - but we'll get to that later).

Information about the lights is constant for the scene, so we can either pass it as a uniform variable or hard-code it into the shaders.
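
To make the uniform-variable option concrete, here is a sketch of the JavaScript side. This is not code from the workbook: the uniform names (surfaceColor, lightColor, lightDir) and the variables holding the shader source (vertexSource, fragmentSource) are made up for illustration.

// a sketch: pass surface and light information to our shaders as uniforms
let material = new THREE.ShaderMaterial({
    uniforms: {
        surfaceColor: { value: new THREE.Color(1, 0.8, 0.4) },  // surface property
        lightColor:   { value: new THREE.Color(1, 1, 1) },      // light color/intensity
        lightDir:     { value: new THREE.Vector3(0, 0, 1) }     // light direction
    },
    vertexShader: vertexSource,
    fragmentShader: fragmentSource
});

Inside the shaders, each of these would then be declared as a uniform (for example, uniform vec3 lightDir;) rather than a const.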

Box 2: Simple Lighting

Let's try a simple example. We'll make a purely diffuse surface lit by a single directional light source. The lighting equation is:

c = c_s * c_l * (n̂ · l̂)

where c is the resulting light color, c_s is the surface color, c_l is the light color, n̂ is the unit normal vector, and l̂ is the unit light vector (the direction the light comes from).

This is quite simple in code. To make it even simpler, I will assume that the light color c_l is white.

In the vertex shader, we can do everything as we have been, except that now we have to pass the normal vector. There is one catch: the normal vectors are in the object's local coordinate system. Just as we transform the object's positions by the "model" matrix to get them into "world" coordinates, we need to apply a similar transformation to the normals. It turns out that if you transform an object by a matrix M, you have to transform its normals by a different matrix N (which is the adjoint, or inverse-transpose, of M). The math for this is discussed in Section 6.2.2 of Fundamentals of Computer Graphics (FCG4_Ch06.pdf). Don't worry too much - THREE knows about normal matrices.

So, when we transform the vertex to get its final position, we also transform the normals using the normalMatrix that THREE gives us. There is one slight catch: notice that we transform the position by modelViewMatrix because we need to know where the vertex is going to end up in view coordinates (we need both the modeling matrix and the viewing matrix). The normalMatrix in THREE is similar: it tells us what direction the normal will be pointing in view (not world) coordinates. This is documented on the WebGLProgram page.

So, our vertex program (which is in diffuse1.vs - with comments) looks like:

varying vec3 v_normal;
void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    v_normal = normalMatrix * normal;
}

Again, notice how we need to declare a varying variable, and that we have to compute the transformed normal (that is, transformed the same way that the object is). Also notice that the normal is not transformed by the projection: we don't want the lighting affected by perspective.

The action happens in the fragment shader which computes the lighting equation.

varying vec3 v_normal;
const vec3 lightDir = vec3(0,0,1);
const vec3 baseColor = vec3(1,.8,.4);

void main()
{
    // we need to renormalize the normal since it was interpolated
    vec3 nhat = normalize(v_normal);
    // deal with two sided lighting
    float light = abs(dot(nhat, lightDir));
    // brighten the base color
    gl_FragColor = vec4(light * baseColor,1);
}

Let's discuss this part by part.

First, I declare some "global" variables. I declare the varying variable to receive normal information from the vertex shader. I also declare two constants: the light direction vector lightDir and the surface color baseColor - these correspond to l̂ and c_s in the equation.

In the shader itself, the first thing I do is compute nhat (which is n̂). I need to renormalize the vector: because the fragment normal is computed by linear interpolation of the vertex normals, it may no longer be unit length (even if the vertex normals were unit length).

Then I compute the dot product (n̂ · l̂) - just as in the equation. One slight deviation: I take the absolute value of this, so if the normal is facing inward I still get the same lighting. This makes sure things work for two sided lighting.

Finally, I use this brightness amount to change the color.

There is a hidden trick here: the normal vector is in the view (or camera) coordinate system. The Z axis is perpendicular to the image plane (basically, pointing towards the camera). If you look at the results, you'll see it's as if the light is where the camera is (notice how the light on the sphere is brightest at the part that points towards the camera). You should also notice that although this is diffuse lighting, it changes as the camera moves (because the light is moving with the camera).

The JavaScript 4-1-diffuse.js is similar to the previous examples, but make sure you understand the shaders diffuse1.vs and diffuse1.fs before going on.

Box 3: Light Parameters and Camera Coordinates

Usually, we like to think about lights in "world coordinates", not coordinates that move with the camera. So the previous example is inconvenient. In Box 2, the light was attached to the camera. If we wanted to have the light defined in the world (for example, we might like the light coming from straight above - (0,1,0) - as if it were the sun at noon, or a light in the ceiling), we're stuck.

It turns out this is a common problem. In many graphics systems, there is no notion of the "world coordinates" - there are just camera coordinates. All other coordinate systems are up to the programmer. The fact that we have "world coordinates" is our own convention.

There are a few things we could do; here are two general approaches:

  1. We could compute the normals in world coordinates. Unfortunately, while THREE gives us normalMatrix which is the adjoint of the modelViewMatrix, it has no equivalent pre-defined uniform for the adjoint of the modelMatrix. We have to compute it ourselves, and make our own uniform variable.

  2. We could transform the lights into view coordinates by transforming them by the viewing matrix. This is actually what THREE (and most graphics systems) do.

Let's try both approaches and make a light from vertically above (with the same diffuse material).

We'll try approach #2 first: transforming the lights. The simplest thing to do would be to apply the view transformation in the fragment shader, re-writing it as...

varying vec3 v_normal;
const vec3 lightDirWorld = vec3(0,1,0);
const vec3 baseColor = vec3(1,.8,.4);

void main()
{
    // we need to renormalize the normal since it was interpolated
    vec3 nhat = normalize(v_normal);

    // get the lighting vector in view coordinates
    // warning: this is REALLY wasteful!
    // (w=0 so the view matrix's translation does not affect the direction)
    vec3 lightDir = normalize( (viewMatrix * vec4(lightDirWorld, 0.0)).xyz );

    // deal with two sided lighting
    float light = abs(dot(nhat, lightDir));

    // brighten the base color
    gl_FragColor = vec4(light * baseColor,1);
}

This works (note how the light comes from above, so the square is dark).

Notice that because I am doing "two sided" lighting (with that abs), the light comes both from above and below (the top and bottom of the sphere are lit).

The downside is that this is really inefficient. We are doing a matrix multiply to change the light direction once for every fragment. That's a lot of work that we don't need to be doing.

The alternative would be to make the light direction a uniform variable. The problem with this is that when we create uniform variables, we don't know what the camera will be (or have the view matrix). For THREE's built in lights, this is implemented in the render loop so that the appropriate light directions are computed just before rendering when the view matrix is known. THREE provides mechanisms for performing these kinds of "pre-rendering" computations, but we won't discuss them.
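
To sketch the idea (this is not THREE's actual implementation - material, camera, renderer, and scene are assumed to already exist, and lightDirView is a made-up uniform name), the per-frame update might look like:

// a sketch: recompute a view-space light direction uniform each frame
let lightDirWorld = new THREE.Vector3(0, 1, 0);
function animate() {
    // matrixWorldInverse is the camera's view matrix; transformDirection
    // uses only its rotation part and renormalizes the result
    material.uniforms.lightDirView.value
        .copy(lightDirWorld)
        .transformDirection(camera.matrixWorldInverse);
    renderer.render(scene, camera);
    window.requestAnimationFrame(animate);
}
window.requestAnimationFrame(animate);

This moves the matrix multiply out of the fragment shader: it happens once per frame instead of once per fragment.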

We could use a similar strategy to define our own "model matrix adjoint" uniform; we would need to recompute it every time the model matrix changed. Again, THREE has ways to do this, but we aren't going to take time to learn about them.
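
If you are curious, here is one possible sketch. The uniform name modelNormalMatrix is made up; it uses THREE's Matrix3.getNormalMatrix, which computes the inverse-transpose of the upper 3x3 of a Matrix4.

// a sketch: keep our own "model matrix adjoint" uniform up to date
material.uniforms.modelNormalMatrix = { value: new THREE.Matrix3() };
mesh.onBeforeRender = function () {
    // recompute the inverse-transpose of the model matrix's upper 3x3
    material.uniforms.modelNormalMatrix.value.getNormalMatrix(mesh.matrixWorld);
};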

Box 4: Using THREE's Lights

Of course, to really do things correctly and make them blend into our scenes, we should use the lights that are defined in the THREE scene so our objects using our shaders have the same lighting as those using THREE's shaders.

Doing this requires:

  1. Setting up uniforms that receive information about THREE's lights. Fortunately, THREE will set this up for us. We just need to use some poorly documented parts of THREE (the UniformsLib).
  2. In our shaders, we need to loop over all of the lights and sum up their contributions.

The upside is that THREE gives lighting information in view space, so the issues discussed in Box 3 are taken care of.
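
As a rough sketch of step 1 (vertexSource and fragmentSource are assumed to hold shader code that actually declares and uses THREE's light uniforms, and surfaceColor is a made-up uniform of our own):

// a sketch: merge THREE's light uniforms into our ShaderMaterial
let material = new THREE.ShaderMaterial({
    uniforms: THREE.UniformsUtils.merge([
        THREE.UniformsLib.lights,                                   // THREE's light uniforms
        { surfaceColor: { value: new THREE.Color(1, 0.8, 0.4) } }   // our own uniforms
    ]),
    vertexShader: vertexSource,
    fragmentShader: fragmentSource,
    lights: true    // ask THREE to keep the light uniforms up to date
});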

Things get even trickier if we want to do shadows.

We will not require you to figure out how to use THREE's lights in a shader - it will be sufficient for the exercises (future pages) to make a simple directional light source in camera coordinates. However, you can make your shaders work with THREE's lights for bonus points.

Summary: Lighting in Shaders

Short version: we'll let THREE take care of it. We might want to do a little simple lighting to add to our more interesting shaders (next).

On the next page we'll try something more interesting.