Here is the link to my recent talk at Google that we discussed in class: http://www.youtube.com/watch?v=xl6PrajUws0
Homework assignment, due Thursday April 5, before class starts:
Computing the inverse viewport

As we discussed in class, the viewport transformation for ray tracing goes from image space to object space, rather than the other way around. When doing ray tracing, the first thing you need to do is compute this transformation, which you can do as follows.
Given a pixel (col,row) in the image, the ray through that pixel is computed as follows:
v = (0, 0, 0)
w = normalize( [ (col - 0.5*nCols) / nCols , (0.5*nRows - row) / nCols , -focalLength ] )
where normalize(vec) can be implemented as follows:

    norm = sqrt(vec·vec)
    for 0 <= i < 3:
        vec[i] /= norm

To render the image, we loop through all pixels. For each pixel:
- Compute the point v and direction w that define the ray at this pixel.
- Loop through all spheres in the scene; for each sphere, compute the first root t, if any.
- The visible surface at this pixel (if any) is at the sphere with the smallest positive value of t.
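The ray-setup step above can be sketched as follows. This is just one way to write it (the names make_ray and normalize are mine, not part of the assignment, and any language is fine):

```python
import math

def normalize(vec):
    """Scale vec to unit length: vec / sqrt(vec . vec)."""
    norm = math.sqrt(sum(c * c for c in vec))
    return [c / norm for c in vec]

def make_ray(col, row, n_cols, n_rows, focal_length):
    """Ray (v, w) through pixel (col, row); camera at the origin, looking down -z."""
    v = (0.0, 0.0, 0.0)
    # Both x and y are divided by n_cols so that pixels stay square.
    w = normalize([(col - 0.5 * n_cols) / n_cols,
                   (0.5 * n_rows - row) / n_cols,
                   -focal_length])
    return v, w
```

Note that w comes out normalized, which the intersection math below relies on (it makes the quadratic's leading coefficient 1).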
To intersect the ray with a single sphere:
Let P = [ wx t + vx , wy t + vy , wz t + vz ] (equation 1)
A point P on a sphere of radius r centered at C satisfies:
(P - C)·(P - C) - r² = 0 (equation 2)
Plugging equation 1 into equation 2, we get:
(wx t + (vx - cx))² + (wy t + (vy - cy))² + (wz t + (vz - cz))² - r² = 0
Rearranging terms, we get:
(w·w) t² + 2 w·(v-c) t + (v-c)·(v-c) - r² = 0
Which gives coefficients for a quadratic equation in t:
A = w·w (which is 1.0, since w is normalized), B = 2 w·(v-c), C = (v-c)·(v-c) - r²
Since A = 1, the quadratic formula gives:
t = -w·(v-c) ± sqrt( (w·(v-c))² - (v-c)·(v-c) + r² )
The ray hits the sphere iff the discriminant is non-negative.
The smaller of the two roots (taking the − sign) is where the ray enters the sphere; only a positive root counts as a visible intersection.
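A sketch of this intersection test, assuming w has been normalized so that A = 1 (the function name is mine; it returns the smallest positive root, or None on a miss):

```python
import math

def intersect_sphere(v, w, center, radius):
    """Smallest positive root t of the ray/sphere quadratic, or None."""
    d = [v[i] - center[i] for i in range(3)]        # v - c
    b_half = sum(w[i] * d[i] for i in range(3))     # w·(v-c), i.e. B/2
    c = sum(x * x for x in d) - radius * radius     # (v-c)·(v-c) - r²
    disc = b_half * b_half - c                      # discriminant / 4
    if disc < 0:
        return None                                 # ray misses the sphere
    t = -b_half - math.sqrt(disc)                   # first root: entry point
    return t if t > 0 else None
```

For example, a ray from the origin aimed straight at a unit sphere centered at (0, 0, -5) enters it at t = 4.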
Plug this t into equation 1 to get the surface point S.

Computing the surface normal

The surface normal for a sphere at center C and radius r is simply given by:
N = (S - C) / r

The reflection vector R

To do both Phong shading and to create reflection rays, you will need to compute the reflection vector R.
Given the direction w along the ray, and the surface normal N, you can compute R as follows:
R = w - 2(N·w) N

The Phong algorithm

The Phong shading at a surface point is:

Argb + ∑i Irgb ( Drgb (Li · N) + Srgb (Li · R)^p )

where we can model each light as a direction vector and an illuminance color:

Light = [ Lxyz , Irgb ]
That's a total of six numbers. The first three numbers Lxyz represent the direction vector toward the light source. For now you can assume that each light source is extremely far away (like light from the Sun), so that the direction vector remains the same no matter what the location of the surface point at which we are computing Phong shading.
The last three numbers Irgb represent the rgb illuminance of the light source (how much light the source produces, in each of red, green and blue).

Creating a Material object

You should create an object class Material to store material data. For now, Material will have 10 Phong algorithm values:
- Argb (ambient color, with 3 values)
- Drgb (diffuse color, with 3 values)
- Srgb (specular color, with 3 values)
- p (specular power, with 1 value)

plus, in the case of ray tracing, the "mirror color" mcrgb, with 3 values.

If mcrgb is black (that is, [0,0,0]), there is no mirror reflection.
If mcrgb is white (that is, [1,1,1]), the surface acts like a perfect mirror.
If mcrgb is red (that is, [1,0,0]), the surface acts like a mirror tinted red.

Creating reflections

If the mirror color for this surface is not black, continue tracing the ray path recursively by using the reflection vector R as the direction of the reflected ray, and by moving S by some small amount ε in that direction, to compute the origin of the reflected ray:
v' = S + ε R
w' = R
Return a mixture of Phong shading and the color from the reflected ray:

    for (int i = 0 ; i < 3 ; i++)
        color[i] = phong[i] * (1.0 - mc[i]) + reflection[i] * mc[i];
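Putting the shading pieces together, here is a sketch of the reflection vector, the Phong sum, and the final mix. The names are mine, and I clamp the dot products to zero (a standard safeguard the notes above do not spell out); N, R, and each light direction Li are assumed to be unit vectors:

```python
from dataclasses import dataclass

@dataclass
class Material:
    A: tuple   # ambient color (r, g, b)
    D: tuple   # diffuse color
    S: tuple   # specular color
    p: float   # specular power
    mc: tuple  # mirror color; (0, 0, 0) means no mirror reflection

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(w, N):
    """R = w - 2 (N·w) N."""
    k = 2.0 * dot(N, w)
    return [w[i] - k * N[i] for i in range(3)]

def phong(N, R, material, lights):
    """Argb + sum_i Irgb * (Drgb (Li·N) + Srgb (Li·R)^p), per channel."""
    color = list(material.A)
    for L, I in lights:                         # each light is (Lxyz, Irgb)
        diff = max(0.0, dot(L, N))
        spec = max(0.0, dot(L, R)) ** material.p
        for i in range(3):
            color[i] += I[i] * (material.D[i] * diff + material.S[i] * spec)
    return color

def mix_with_reflection(phong_color, reflection_color, mc):
    """color = phong * (1 - mc) + reflection * mc, per channel."""
    return [phong_color[i] * (1.0 - mc[i]) + reflection_color[i] * mc[i]
            for i in range(3)]
```

With mc = [0,0,0] the mix returns pure Phong shading, and with mc = [1,1,1] it returns pure reflection, matching the black/white mirror cases above.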
Implement a simple ray tracer, as per the above notes.
You should create an original scene consisting of spheres.
Your scene should demonstrate both Phong shading and mirror reflection.
A note about camera origin

In my notes above, I placed the camera origin v at (0,0,0). This was a completely arbitrary decision on my part. If you use this camera origin, then in order to see your spheres you would need to place them at negative z.
If you'd like, you can instead place the camera origin at a positive z value, say at (0, 0, focalLength). Then you can just cluster your spheres around (0,0,0), which you might find easier.
For extra credit:
Implement shadows, which you can do as follows. For each of the light sources in the Phong equation:
- Shoot a ray from S to that light source.
- If the ray hits any other object, then do not add the diffuse and specular components from that light when computing Phong shading.
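The shadow test can be sketched as below. The names are mine, and hit_anything is passed in as a parameter only to keep the sketch self-contained; in a real ray tracer it would be the same sphere-intersection loop used for primary rays. The ε offset keeps the shadow ray from re-hitting the surface it starts on, just like the offset used for reflection rays:

```python
EPSILON = 1e-4

def in_shadow(S, L, spheres, hit_anything):
    """True if anything blocks the path from surface point S toward light direction L."""
    # Nudge the ray origin off the surface before testing for blockers.
    origin = [S[i] + EPSILON * L[i] for i in range(3)]
    return hit_anything(origin, L, spheres)
```

If in_shadow(...) is true for a given light, skip that light's diffuse and specular terms in the Phong sum; the ambient term is kept either way.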