CSCI-UA.0480-004

Then we went carefully over how to use the code setup that I created to make your own custom fragment shader, with a simple square as the geometry.
We also watched the MAGI Norelco commercial, which was historically significant because it may have been the first time that computer graphics was mistaken for live action.
For Thursday, September 11, your assignment is to adapt the sample code from class, which you can find as a zip file here, and to modify the fragment shader to do something fun and animated that responds to the user's mouse gestures in some interesting way.
We talked in more detail about GLSL. We spent some time looking at the OpenGL Quick Reference Card. The second-to-last page of that card, which lists Built-in Functions and Common Functions, is particularly useful to you right now. You can also look at the more comprehensive complete documentation for OpenGL, which is here.
We also made some improvements to the code base, which you can grab (see below). Those changes gave us a chance to talk about (1) how to create functions in the fragment shader, and (2) a general idea of the Nyquist sampling theorem, and how you might use it to do proper antialiasing.
At the end of class, we watched the test that MAGI did for Walt Disney right after TRON, on combining traditional character animation with 3D animation, in the form of a scene from Where the Wild Things Are.
For this Thursday, September 11, your assignment is to adapt the sample code from class. In class today, we made some improvements to the code base, which you can find as a zip file here.
We went over in more detail how homogeneous coordinates work, and how they allow you to use "points at infinity" as direction vectors.
We also talked a bit about the use of the fourth "alpha" color channel for blending and transparency.
We talked about two different ways to deal with the same function: (1) Evaluating it at many points on its domain, and (2) Solving for where the function's value equals zero (also known as the "roots of the equation").
We began discussing the ideas behind ray tracing, and started to set up the problem of how to trace a ray from a point V = (vx,vy,vz,1) into a direction W = (wx,wy,wz,0) and see where (or whether) it will intersect a sphere centered at (cx,cy,cz) of radius r.
Since the surface of that sphere consists of points where (x - cx)^2 + (y - cy)^2 + (z - cz)^2 - r^2 = 0, we will need to substitute the points on the ray into this equation, which we will do when we next meet on Tuesday, Sep 16.
At the end of the class we watched Carlitopolis by Luis Nieto.
For next Thursday, September 18, your assignment is to create interesting, fun and colorful geometric shapes in your fragment shader. See if you can make triangles, rectangles, diamond shapes, hexagons, ovals, and any other interesting shapes.
Each shape should have a color.
For extra credit, see if you can get shapes to animate when the mouse is over them, or to respond to mouse clicks in some other interesting and fun way.
We are doing this assignment so that you will have more practice using programming constructs like if statements and function calls, to help prepare you for the harder problem of implementing ray tracing in your fragment shader.
All assignments should be completed before the start of Thursday's class.
In this lecture we went over the fundamental math for tracing a ray to a sphere (below), and then we watched The Centrifuge Brain Project by Till Nowak.
Rather than working everything through in three dimensions, we worked it through for the 2D case, tracing a ray in the plane to a circle:
A ray in the plane is given by (V + t W), where ray origin V is the column vector [Vx,Vy,1], and ray direction W is the unit length (that is, "normalized") relative vector [Wx,Wy,0].
So any point at a distance t along the ray has [x,y] coordinates [ Vx + t * Wx , Vy + t * Wy ].
A circle, described by center [Cx,Cy] and radius r, consists of all points [x,y] for which (x - Cx) * (x - Cx) + (y - Cy) * (y - Cy) - r * r = 0.
To get the solution (if there is one) to the intersection of the ray with the circle, we can plug the [x,y] coordinates along the ray into the circle equation. This will give us an equation where everything is constant except for t:
( Vx - Cx + t * Wx ) * ( Vx - Cx + t * Wx ) + ( Vy - Cy + t * Wy ) * ( Vy - Cy + t * Wy ) - r * r = 0.
We can now separate out terms to form a quadratic polynomial in t:
t * t * (Wx * Wx + Wy * Wy) +
t * ( 2 * (Vx - Cx) * Wx + 2 * (Vy - Cy) * Wy ) +
(Vx - Cx) * (Vx - Cx) + (Vy - Cy) * (Vy - Cy) - r * r = 0
Now we can observe several things that make the problem simpler. For one thing, W is unit length, so (Wx * Wx + Wy * Wy) is just 1.0.
For another thing, all the other products can be expressed as inner products. So our quadratic polynomial can just be expressed as:
t * t + 2 * t * ((V-C)·W) + ( (V-C)·(V-C) - r*r ) = 0
Solving via the quadratic formula we get:
t = -B ± sqrt( B * B - C )
where B = (V-C)·W and C = (V-C)·(V-C) - r*r.
We now have a way of knowing what will happen if we try to shoot this ray at this circle. The ray will miss the circle when this equation has no real roots. That is, when B * B - C < 0.
If the ray hits the circle, it will enter the circle where t = -B - sqrt(B*B - C) and it will exit the circle where t = -B + sqrt(B*B - C).
Notice that nothing about this equation relies on it being two dimensional. Everything here will work equally well if we are ray tracing a three dimensional ray to a sphere.
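Assuming nothing beyond the derivation above, here is one way it might look as a JavaScript sketch (the function name and the array representation of points are my own; as noted, the same code works for the 2D circle and the 3D sphere):

```javascript
// Intersect the ray (V + t W) with a circle/sphere of center C and
// radius r. V, W and C are arrays of equal length (2 in the plane,
// 3 in space); W must be unit length. Returns the entering value of
// t, or null if the ray misses.
function raySphere(V, W, C, r) {
    let B = 0, CC = -r * r;
    for (let i = 0; i < V.length; i++) {
        B  += (V[i] - C[i]) * W[i];              // B = (V-C)·W
        CC += (V[i] - C[i]) * (V[i] - C[i]);     // C = (V-C)·(V-C) - r*r
    }
    let discriminant = B * B - CC;
    if (discriminant < 0)
        return null;                             // no real roots: a miss
    return -B - Math.sqrt(discriminant);         // the entering root
}
```

The exiting root would be -B + Math.sqrt(discriminant), per the formulas above.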
We went over the math for how to form a ray at every pixel (below), and then we saw the historically pivotal Kitchen Scene from Jurassic Park.
Forming a ray at a pixel:
At every pixel we need to form a ray to shoot into the scene.
The origin of the ray will be the "camera", which is located along the positive z axis. The further away the camera is from the x,y plane (that is, the longer the "focal length" of the camera), the more telephoto the view. The nearer the camera is to the x,y plane, the more wide angle the view.
If we set the focal length to some value fl, then the camera is located at point V = (0,0,fl,1).
If we shoot a ray from this camera to a pixel that goes through point (x,y,0,1) on the x,y image plane, we need to calculate the unit length direction W for this ray. We do this in two steps: (1) subtract the ray origin V from the pixel's point to get the relative vector (x, y, -fl, 0), and (2) normalize that vector to unit length.
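A minimal JavaScript sketch of those two steps (the function name is mine; it assumes the camera at (0, 0, fl) and the image plane at z = 0, as described above):

```javascript
// Form the unit length ray direction W for the pixel that goes
// through (x, y, 0), with the camera (ray origin) at (0, 0, fl).
function pixelRay(x, y, fl) {
    // Step 1: the un-normalized direction from camera to pixel:
    // (x, y, 0) - (0, 0, fl) = (x, y, -fl).
    let dx = x, dy = y, dz = -fl;
    // Step 2: normalize to unit length.
    let len = Math.sqrt(dx * dx + dy * dy + dz * dz);
    return [dx / len, dy / len, dz / len];
}
```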
I recommend encoding each sphere as a vec4, with the first three components of the vec4 storing the center point (cx,cy,cz) of the sphere, and the fourth component storing the sphere's radius r.
I strongly recommend that you implement a function in your fragment shader that takes three arguments: a ray origin V, a ray direction W, and a vec4 containing the cx,cy,cz,r of the sphere.
Your function should return the value of t  the distance along the ray  where the nearest intersection occurs between the ray and the sphere.
If your ray misses the sphere entirely, you can return a very large positive number, such as 10000.0.
Each time your program is called, you will need to form a ray for that pixel, then trace the ray to each of the two spheres. If your ray hits both spheres, then the one "in front" is the one with the smaller value of t.
I suggest you color the background one color, and each of the two spheres a different color. Position your spheres so that the rendered scene will show one sphere with a larger value of cz (in other words, nearer to the camera) partly obscuring the other sphere.
We took a closer look at the Chalktalk research presentation tool that I've been using to teach this class.
Screenshot of Chalktalk being used to simulate a musical instrument
We had a guest lecture by Kristofer Schlachter, a Ph.D. student in the NYU Department of Computer Science. He talked about his experiences in the computer game industry, and his research into advanced ray tracing techniques using the latest features of the GPU.
Kris also went over in class a code example I made showing how to create an initialization and update function in JavaScript, and how to pass arrays from JavaScript into the fragment shader.
We then had a demo of Virtual Reality, and shared VR between multiple people, by Zhu Wang, who is a research scientist in our lab. The demo you saw was implemented by Zhu.
Your assignment, due by class on Thursday October 2, is to modify your sphere tracing scene so that it starts to make use of arrays, using my code example as a guide. There are a number of ways you can do this. One is to specify your sphere data in your JavaScript, and then pass it into your fragment shader each frame. Note that this will allow you to start doing animation logic in your JavaScript code, which is a good place for it.
The reason we are doing this is to give you practice with more powerful GPU programming features, as we continue to learn more about ray tracing.
Feel free to think of other creative ways to use arrays passed from JavaScript into the fragment shader to make your assignment more interesting.
We went over the basics of the Phong reflectance algorithm, a simple approximation to how surfaces interact with light, originally developed by Bui Tuong Phong. Phong reflectance consists of three components: Ambient, Diffuse and Specular.
The Ambient component uses a single color to approximate a surface's response to the light that is bouncing around the room.
The Diffuse component describes a perfectly diffuse Lambert reflector, which attenuates in brightness as the surface normal tilts away from the direction of a light source. The diffuse component is given by D_{rgb} N·Ldir_{i} Lrgb_{i}, where Ldir_{i} is the direction of light source L_{i} and Lrgb_{i} is the rgb color of light source L_{i}.
The Specular component approximates how light bounces off a shiny surface. Because the shiny surface is not quite mirror smooth, the reflected light spreads out. A power term p is used to vary the apparent shininess. The higher the value of p, the more shiny the surface appears. The specular component is given by S_{rgb} (R · Ldir_{i})^{p} Lrgb_{i}, where R is the reflection of the viewer's direction.
In class I described the specular term slightly differently (I used the reflection of the light direction), but this variation will be more useful to you, because this way you will only need to calculate R once, and then you can keep using the same R vector for all of your light sources.
In class we built an example shader that shows part of the Phong reflectance algorithm in action. It implements only the Ambient and Diffuse components, not the Specular component. That code is here.
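As a rough sketch of the full three-component sum for one light (this is my own illustration, not the class code; I assume negative dot products are clamped to zero, and that R is the reflected view direction described above):

```javascript
function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// Phong reflectance for a single light source. ambient, diffuse,
// specular and lightColor are [r,g,b] triples; N, Ldir and R are unit
// vectors; p is the specular power. Negative dot products are clamped
// to zero so surfaces facing away from the light get no contribution.
function phong(ambient, diffuse, specular, p, N, Ldir, R, lightColor) {
    let d = Math.max(0, dot(N, Ldir));
    let s = Math.pow(Math.max(0, dot(R, Ldir)), p);
    return [0, 1, 2].map(i =>
        ambient[i] + lightColor[i] * (diffuse[i] * d + specular[i] * s));
}
```

For multiple lights, you would sum the diffuse and specular contributions of each light, keeping a single ambient term.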
I also made a slight fix to the support library in that folder, which is now called gl_lib3.js, so that it will print more informative error messages to the JavaScript console.
We also went over how to compute a reflection direction vector, given the vector toward an incoming direction, and the surface normal.
For example, if a view ray is coming in from incident direction I, then the outgoing reflection direction is given by 2 N (N · I) - I.
When you incorporate this into your ray tracer, incident direction I will just be the negative of your ray's W vector.
Finally, we watched Paul Debevec's seminal 1999 computer animation Fiat Lux.
Your assignment, which is due before class on Thursday Oct 16, will consist of two parts.
The first part is to extend your ray tracer so that it implements the full Phong reflectance algorithm. Your scene should have multiple spheres and multiple light sources, and the material of each sphere should have an Ambient, Diffuse and Specular component.
The second part of your assignment will be to participate in a group project in class this coming Tuesday and Thursday (Oct 7 and Oct 9). After those sessions, try to work with the Chalktalk code that was distributed in class, to make a simple working prototype of the ideas you sketched out in class.
In our Thursday Oct 16 class we will spend some of the class time going over these sketches and prototypes.
Reflecting rays
To create mirror-like reflection in ray tracing, we shoot another ray, starting from the surface point S, and see whether that ray hits another object. Since the ray (V+tW) that is coming into the surface is going in direction W, then the direction of the emerging reflected ray is going to be the mirror image of W:
W' = -(2 N (N · W) - W) = -2 N (N · W) + W

The origin of the reflected ray is going to be just outside of the surface. A good way to find such a point is to use a small value ε, such as ε = 0.001, and then use it to move slightly out of the surface:
V' = S + ε W'

When you shoot this reflected ray into the scene, you can mix the resulting color together with the result of your surface's original Phong reflectance color. The more of the reflected ray color that you mix into the final color, the more "mirror-like" will be the final appearance of the object.
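Putting those two formulas together, a small JavaScript sketch (the function and variable names are mine):

```javascript
// Given surface point S, incoming unit ray direction W, and unit
// surface normal N, form the reflected ray: direction W' = W - 2N(N·W),
// and origin V' = S + epsilon * W', just outside the surface.
function reflectRay(S, W, N, epsilon) {
    let NdotW = N[0]*W[0] + N[1]*W[1] + N[2]*W[2];
    let Wp = W.map((w, i) => w - 2 * N[i] * NdotW);   // W'
    let Vp = S.map((s, i) => s + epsilon * Wp[i]);    // V'
    return { direction: Wp, origin: Vp };
}
```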
Background gradient
Of course many rays will end up missing all of the objects, and these are rays that end up flying off into the background. Rather than make the background black, you can compute a color that suggests a more interesting background.
One way to do this is to create a color gradient, using the y component of the ray, so that color appears to gradually change as a function of the latitude of the background direction.
Procedural texture
You can add procedural texture to any component of the ray tracing algorithm, to make surfaces look more interesting and textured. For example, you can vary the ambient or diffuse components of your surface, based on noise(S) (where S is the surface point), to create a mottled appearance.
You can also try adding noise to vary the surface normal N, to create the appearance of a non-smooth surface.
To generate procedural noise within your fragment shader, you can include this code into your fragment shader to implement noise, as well as a fractal sum of noise and "turbulence", which is a fractal sum of the absolute value of noise.
Your assignment, which is due before class on Thursday Oct 23, is to implement ray reflection and a background color gradient, and also to incorporate noise-based procedural texture into your scene.
You can create multiple levels of ray reflection by using a for loop in your fragment shader, but remember that the loop will actually become unrolled by the compiler, so you can only "loop" for an explicitly specified number of steps.
We started the class by going over a complete example of Phong reflectance. I've included that version of the code here.
We then went over shadows, refraction, ray tracing to planes and booleans at a high conceptual level.
The essential idea behind ray tracing shadows is that you cast a "query ray" (a ray to find out information) from the surface point S into the direction of each light source L_{i}. If the ray into a given light direction hits any other object, then S is in shadow from that light, and you should not add in either the diffuse or specular components of that light source.
Refraction can occur when light enters a transparent object, such as water, glass or plastic. When a ray of light enters a transparent material, it may slow down, and the amount that the light slows down is referred to as that material's refractive index n. For example, if n = 1.5, that means that light is traveling only 2/3 as fast as it travels in a vacuum. If C is the speed of light in a vacuum, then the speed of light in a medium of refractive index n is given by (C / n).
At the surface between two transparent media (such as air and glass), light will bend, or refract. On Thursday's lecture we will go over this in more detail.
Up until now the only shape that we ray traced to has been a sphere. We can ray trace to any shape whose surface can be described mathematically. For example, we can ray trace to any plane, using the general linear equation for a plane: ax + by + cz + d = 0. Note that this linear equation is described by a vector P with four coefficients (a,b,c,d), and can be thought of as an inner product: P · X, where X = (x,y,z,1).
Given a ray X = (V + t * W), we can find the solution for P · X = 0 the same way we did for spheres: by substituting V+tW into the equation.
This gives us: P · (V + t * W) = (P · V) + t * (P · W) = 0
From this, it is easy to see the solution: t = -(P · V) / (P · W)
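In JavaScript this might look as follows (the function name is mine; points are homogeneous [x,y,z,1] and directions are [x,y,z,0], as in the text):

```javascript
// Intersect the ray (V + t * W) with the plane P = [a, b, c, d],
// using t = -(P · V) / (P · W).
function rayPlane(V, W, P) {
    let dot4 = (u, v) => u[0]*v[0] + u[1]*v[1] + u[2]*v[2] + u[3]*v[3];
    return -dot4(P, V) / dot4(P, W);
}
```

For example, for the plane z - 2 = 0, P = [0,0,1,-2], and a ray from the origin along the positive z axis hits it at t = 2.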
In a sense, this equation defines the surface of an infinite half space volume. The set of points X for which P · X is negative is the "inside" of this half space, and the set of points X for which P · X is positive is its "outside".
The surface normal of the plane is the same everywhere, and is given by Normal(P) = normalize(a,b,c).
We can take boolean intersections of half spaces to create finite shapes, such as cubes. For example, a unit cube is defined as the intersection of six half spaces (two to bound x, two more to bound y, and another two to bound z).
If we shoot a ray (V + t * W) to a shape that is defined as the intersection of a set of half spaces P_{i}, we need to do two things: (1) find tI, the largest value of t among all of the entering planes, and (2) find tO, the smallest value of t among all of the exiting planes.
An entering plane is one where the ray enters the half space. This will occur when the surface normal points toward the ray origin. In other words, when Normal(P_{i}) · W < 0. An exiting plane is one where the ray exits the half space. This will occur when the surface normal points away from the ray origin. In other words, when Normal(P_{i}) · W > 0.
If tI < tO, then the ray has intersected the shape.
If tI > tO, then the ray has missed the shape.
At the end of this class, we watched Bruce Branit's iconic 2007 short film Worldbuilder.
In this class we went over refraction in a bit more detail. In particular, we reviewed Snell's Law, which describes exactly how much light bends when it crosses from a medium with index of refraction n1 to a medium with index of refraction n2. Snell's Law is given by:
n1 * sin(θ_{1}) = n2 * sin(θ_{2})
where θ_{1} is the angle of deviation from the surface normal of the entering ray, and θ_{2} is the angle of deviation from the surface normal of the exiting ray.
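A tiny JavaScript sketch of solving Snell's Law for θ₂ (my own illustration; it assumes n1 * sin(θ1) ≤ n2, that is, no total internal reflection):

```javascript
// Snell's Law: n1 * sin(theta1) = n2 * sin(theta2).
// Given the entry angle theta1 (radians, measured from the surface
// normal), solve for the exit angle theta2.
function snell(theta1, n1, n2) {
    return Math.asin(n1 * Math.sin(theta1) / n2);
}
```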
We also looked a bit more closely at booleans of other shapes, such as spheres. For example, if you want to render a flying saucer shape, you can ray trace the intersection of two spheres. Along any given ray, the first sphere will have roots I1 and O1 where it enters and exits, respectively. The second sphere will have roots I2 and O2 where it enters and exits, respectively.
So the segment along the ray which describes the intersection of the two spheres is given by:
tI = max(I1, I2)
tO = min(O1, O2)
From this, you can follow the same rule for determining whether the ray has intersected the shape:
If tI < tO, then the ray has intersected the shape.
If tI > tO, then the ray has missed the shape.
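The whole boolean test fits in a few lines of JavaScript (a sketch; the names are mine):

```javascript
// Boolean intersection of two shapes along one ray. Each argument is
// a [tEnter, tExit] pair of roots for one shape. Returns the [tI, tO]
// segment where the ray is inside both shapes, or null for a miss.
function intersectSegments(a, b) {
    let tI = Math.max(a[0], b[0]);   // latest entry
    let tO = Math.min(a[1], b[1]);   // earliest exit
    return tI < tO ? [tI, tO] : null;
}
```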
Finally we covered six useful primitive operations for linear transformation in three dimensions: Identity, Translation, X Rotation, Y Rotation, Z Rotation and Scale.
The key to each of these is to define a Matrix class, which stores a 4×4 matrix of values.
For the Identity operation, we set this matrix to:

identity()
    1 0 0 0
    0 1 0 0
    0 0 1 0
    0 0 0 1

For each of the other five primitive operations, we first create an internal transformation matrix of values, and then multiply that matrix by the existing 4×4 matrix to modify the values in our Matrix object. Here are the respective transformation matrices:

translate(a,b,c)
    1 0 0 a
    0 1 0 b
    0 0 1 c
    0 0 0 1

rotateX(a)
    1    0       0     0
    0  cos(a) -sin(a)  0
    0  sin(a)  cos(a)  0
    0    0       0     1

rotateY(a)
     cos(a)  0  sin(a)  0
       0     1    0     0
    -sin(a)  0  cos(a)  0
       0     0    0     1

rotateZ(a)
    cos(a) -sin(a)  0  0
    sin(a)  cos(a)  0  0
      0       0     1  0
      0       0     0  1

scale(a,b,c)
    a 0 0 0
    0 b 0 0
    0 0 c 0
    0 0 0 1
You should also be able to call scale(a) with only one argument, to effect uniform scaling. In your implementation, check to see whether the second argument b is undefined. If so, then set both b and c equal to a.
Finally, it is necessary to apply the linear transformation to points in space, which requires implementing a function matrix.transform(point).
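One possible shape for that method, sketched as a plain JavaScript function (I assume the 16 matrix values are stored in a flat array in column-major order, which is the convention WebGL itself uses):

```javascript
// Apply a 4x4 matrix m (flat array of 16 values, column-major) to a
// point p = [x, y, z, 1]. Element (row, col) lives at m[col * 4 + row].
function transform(m, p) {
    let q = [];
    for (let row = 0; row < 4; row++)
        q.push(m[row]     * p[0] + m[row + 4]  * p[1] +
               m[row + 8] * p[2] + m[row + 12] * p[3]);
    return q;
}
```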
Because matrix multiplication is associative, you can multiply a whole sequence of transformation matrices together first, and then apply the single resulting matrix to all of your points.
At the end of this class, we looked at a very inspiring real time WebGL demo with caustics and physics by Evan Wallace.
Your assignment, which is due before class on Thursday Oct 30, is going to be very easy, in consideration of the fact that you've just gone through midterms in all your other classes. Make a first stab at implementing a matrix class. Within this class you should implement identity(), translate(a,b,c), rotateX(a), rotateY(a), rotateZ(a) and scale(a,b,c), and satisfy yourself that those functions all produce correct output.
Doing that much will prepare you properly for what we will be covering next in class.
The HTML5 Canvas object provides a very handy way to do 2D graphics. We are going to be using it in the next week or two as a way for you to test out your Matrix routines.
Here is the official online reference to the HTML5 Canvas object. Feel free to explore any of its functions and capabilities, (eg: setting lineWidth), in addition to the ones in the example I showed in class.
Here is the example we did in class.
We then went over matrix multiplication. To multiply two 4×4 matrices A and B, you can think of A as a vertical stack of 4 horizontal vectors, and B as a horizontal sequence of 4 vertical vectors:
    A_{0,0} A_{1,0} A_{2,0} A_{3,0}       B_{0,0} B_{1,0} B_{2,0} B_{3,0}
    A_{0,1} A_{1,1} A_{2,1} A_{3,1}   ×   B_{0,1} B_{1,1} B_{2,1} B_{3,1}
    A_{0,2} A_{1,2} A_{2,2} A_{3,2}       B_{0,2} B_{1,2} B_{2,2} B_{3,2}
    A_{0,3} A_{1,3} A_{2,3} A_{3,3}       B_{0,3} B_{1,3} B_{2,3} B_{3,3}
The result C = A×B is given by taking the dot product of every combination of the rows of A and the columns of B. There are 4*4 = 16 such combinations, corresponding to the 4 rows and 4 columns of the result matrix C.
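Here is one way those 16 dot products might look in JavaScript (a sketch; I assume flat arrays of 16 values in column-major order, WebGL's convention):

```javascript
// C = A x B for 4x4 matrices stored as flat, column-major arrays of
// 16 values: C(row, col) is the dot product of row `row` of A with
// column `col` of B.
function multiply(A, B) {
    let C = [];
    for (let col = 0; col < 4; col++)
        for (let row = 0; row < 4; row++) {
            let sum = 0;
            for (let k = 0; k < 4; k++)
                sum += A[k * 4 + row] * B[col * 4 + k];
            C.push(sum);
        }
    return C;
}
```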
At the end of the class, we first saw a selection of scenes from Minority Report, which shows a "vision of the future" from 2002.
Then we saw the desk of the future scene from TRON, which shows an analogous vision twenty years earlier.
Your assignment, due by class on Thursday November 6, is to just have fun with making cool animations using the Canvas object. Go crazy with it, make creatures and houses, science fiction landscapes, words and poetry, pretty much anything you want. The key is to explore and try things out.
Our goal for the following week will be to start using the Canvas element as a way of looking at the results of Matrix transformations and to start to experiment with building shapes out of triangles, and then things will get more serious.
So take this opportunity to just play around and have fun while you can! :)
In class we created an animation of two walking legs.
In the version that you can download, I've replaced the matrix library matrix4x4.js with a stub in which the methods translate, rotateX, etc., don't do anything. If you substitute in the fully functional version that you implemented, you should see the walking legs show up, just like in class.
Your assignment, due by class on Thursday November 13, is to use your fully functional implementation of matrices in place of the non-functioning one that is in the folder now.
One note: implementing translate, rotate and scale requires you to multiply two matrices. There are two possible orders for this matrix multiply. Matrix multiplication is not commutative. So in general, the following two matrix operations produce different results:
A ← A × B
A ← B × A
One ordering will produce sensible results, with progressive transformations going from global to local, as we saw in class.
But if you multiply them in the other order, you won't get sensible results. You'll know if you got the order wrong, because you won't see a pair of walking legs on your web page.
Feel free to try it both ways, to see which argument order for matrix multiply works properly.
In class we explored different ways of generating 3D shapes. Here is the code we ended up with by the end of class.
In the final version of that code, we showed how to create a parametric surface over the two parameters u and v, where 0 ≤ u ≤ 1 and 0 ≤ v ≤ 1.
In particular, we used this technique to create a longitude/latitude globe shape. But we could have used the same technique to create any parametric surface that can be described by two parameters.
In the next class we will continue exploring this technique, and see how it can be used to generate different sorts of shapes.
Between now and Thursday Nov 13, look over that example and familiarize yourself with it.
To get it fully functional, you are going to need to replace the stub matrix library matrix4x4.js with a fully functional one, just as you are already doing for the previous in-class example.
We did further experiments in class with how to create 3D parametric shapes and draw them to a canvas. First we broke the algorithm into two parts: (1) creating a mesh; (2) rendering the mesh.
Then we refined our globe example, and also created a torus.
Finally, we added some time-varying procedural displacement texture. The end result is here.
We spent more time exploring how to make 3D parametric shapes, including superquadrics, and how to make a cylinder as a single parametric surface. We also looked some more at procedural displacement textures.
We also looked at how you might do procedural displacement texturing in a vertex shader, as a lookahead to what comes next, and saw that we would also need to deal with adjusting the surface normal, by taking the discrete derivative of the function used to displace the surface.
In class we developed two examples, canvas5.zip and canvas6.zip.
Your assignment, due by class on Thursday November 20, is to put together the previous two assignments to create an interesting animated scene with fun shapes.
For example, you might make a house, or a tree, or a person, or a dog or a car. Try to think of something that tells a little story (eg: the sun rises in the morning and the people wake up).
Scaled globes and cylinders are very good for making limbs of people and animals and trees.
In class we showed how a parametric cylinder can also be defined as a superquadric. We also added perspective, changing x,y,z via the following perspective linear transformation:
z' ← fl / (fl - z) or, equivalently: 1 / (1 - z/fl)
x' ← x * z'
y' ← y * z'
where fl is the "focal length" of our virtual camera, that is, the distance of the camera from the origin along the positive z axis.
We also showed several other ways to create a sphere. First we used six meshes to form a cube shape, and then "inflated" the vertices of the meshes to form a sphere shape.
Then we used a subdivision technique. We started with eight equilateral triangles, one for each octant, and subdivided each triangle to add more vertices. This shape was then inflated, so that rather than forming an octahedron it formed a sphere.
The result is in canvas8.zip
You can only get so far using arrays for vertices. Eventually you want to make a Vertex be a smart object, with its own access methods and different kinds of data fields.
Using the subdivided sphere as an example, we created a Vertex object type.
We showed in class that the perspective operations
z' ← 1 / (1 - z/fl)
x' ← x * z'
y' ← y * z'
are actually just the following linear transformation, followed by a projection from (x,y,z,w) down to (x/w, y/w, z/w):
    1   0    0    0
    0   1    0    0
    0   0    0    1
    0   0  -1/fl  1
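You can verify that this matrix reproduces the earlier formulas. A quick JavaScript check (my own sketch; the matrix rows act on the column vector [x, y, z, 1]):

```javascript
// Apply the perspective matrix, then project (x,y,z,w) -> (x/w, y/w, z/w).
function perspective(x, y, z, fl) {
    // Matrix rows: [1,0,0,0], [0,1,0,0], [0,0,0,1], [0,0,-1/fl,1].
    let X = x, Y = y, Z = 1, W = 1 - z / fl;
    return [X / W, Y / W, Z / W];
}
```

For fl = 5 and z = 2.5, w = 0.5, so z/w = 2 = 1/(1 - z/fl), matching the formulas above.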
We also discussed, at a high level, the fact that when the z value of a vertex gets very near to fl, and eventually moves to behind the camera plane z=fl, the projected vertex position can blow up, which is not useful. In order to avoid this, modern GPUs contain triangle z-clipping logic, which clips triangles to just in front of the camera plane. This clipping can result in a triangle turning into a quadrangle. In this case, the resulting shape may be sent through the GPU as two separate triangles.
We also talked about how you can describe any geometric shape as a list of vertices and a corresponding list of faces. Each vertex contains (x,y,z) location plus some extra information that we may need, such as surface normal at that vertex.
Each face is a triangle, which is stored as an array of indices, where each index is just the index of some vertex in the vertices array. When viewed from the outside, the vertices of a face should form a counterclockwise loop.
One thing that's tricky about all this is that we need to distinguish between a vertex on a curved surface, where surface normal varies continuously, and the vertices across an edge, when there is a discontinuity of surface normals.
The way we do this is by using a single vertex for a curved surface, which is shared between adjacent faces, but using different vertices across an edge.
So, for example, as we go around the curve of a cylinder, we can share vertices across successive faces. But across the edge between the tube of the cylinder and the top or bottom of the cylinder, we should use different vertices.
A cube is a very simple complete example of a shape that can be described by vertices and faces. Because a cube has edges separating its six faces, we don't share vertices across those six faces. Instead, each face has its own distinct vertices. So a cube should have 24 vertices: three vertices at each of its eight corners.
var vertices = [
   [-1,-1,-1], [ 1,-1,-1], [-1, 1,-1], [ 1, 1,-1],
   [-1,-1, 1], [ 1,-1, 1], [-1, 1, 1], [ 1, 1, 1],   // for the x faces

   [-1,-1,-1], [ 1,-1,-1], [-1, 1,-1], [ 1, 1,-1],
   [-1,-1, 1], [ 1,-1, 1], [-1, 1, 1], [ 1, 1, 1],   // for the y faces

   [-1,-1,-1], [ 1,-1,-1], [-1, 1,-1], [ 1, 1,-1],
   [-1,-1, 1], [ 1,-1, 1], [-1, 1, 1], [ 1, 1, 1],   // for the z faces
];

Geometrically, the above vertices are arranged as follows:
      2-------3
     /|      /|
    6-------7 |
    | |     | |
    | 0-----|-1
    |/      |/
    4-------5

If we were storing four sided faces, we could then describe the six sides as follows:
var faces = [
   [ 0,  4,  6,  2], // negative x face
   [ 1,  3,  7,  5], // positive x face
   [ 8,  9, 13, 12], // negative y face
   [10, 14, 15, 11], // positive y face
   [16, 18, 19, 17], // negative z face
   [20, 21, 23, 22], // positive z face
];
But to make things easier to send to the GPU, we make all faces triangles. So each face of a cube would actually be stored as two triangles:
var faces = [
   [ 0,  4,  6], [ 6,  2,  0],   // [ 0,  4,  6,  2]
   [ 1,  3,  7], [ 7,  5,  1],   // [ 1,  3,  7,  5]
   [ 8,  9, 13], [13, 12,  8],   // [ 8,  9, 13, 12]
   [10, 14, 15], [15, 11, 10],   // [10, 14, 15, 11]
   [16, 18, 19], [19, 17, 16],   // [16, 18, 19, 17]
   [20, 21, 23], [23, 22, 20],   // [20, 21, 23, 22]
];
For Thursday, December 4, your assignment is to figure out how to describe various 3D shapes as a list of vertices and a list of triangular faces.
Shapes you should do this for are: cylinder, sphere, cube (which I already showed you how to do, above), octahedron, torus.
You can try other shapes as well if you are feeling ambitious. For example, can you make a shape that looks like a house? An animal? A tree? 3D letters?
Remember, your triangles all need to be oriented counterclockwise, when viewed from the outside of the shape.
Try making an interesting animated scene using your vertices/faces shapes.
We created a simple humanoid jointed stick figure that is animated entirely by length constraints and simple forces.
We also looked at this example of a procedurally animated walking character.
To run it, you need to add the trusted site http://mrl.nyu.edu to your Java preferences. On a Mac, you can do that as follows: click on the Apple menu and open the System Preferences panel; click on Java; click on the Security tab; click on Edit Site List; add http://mrl.nyu.edu to the list.
In class we went through the code to create faces for a parametric mesh.
Then we went through an example of low level code for sending vertices down to WebGL. I will upload that code to this page soon.
Then we showed how to do the same thing using the high level three.js library, which does most of the work for you. You can find three.js in the chalktalk library that I provided for you earlier this semester.
I need to hand in the grades for this class by Wednesday December 24 before noon, so think in terms of a final project that you can complete by noon of Tuesday December 23 (since I'll need time to grade everyone). It's ok for two people to do a final project together, but remember that such a project will need to be more ambitious in scope.
Your assignment for Thursday, Dec 11 is to finish up anything you may still have unfinished from the assignments to date. This is also a good time to make any improvements or extra enhancements that you were meaning to make, but didn't get around to.
In this class students discussed their final project ideas, and then we had a wide ranging and general discussion about various advanced topics.
In this lecture we went over the math for transforming second order surfaces for purposes of ray tracing. Here is a review of what we did, which also includes a little extra section at the end that shows you how to transform the surface normal (so you can do lighting and shading on your transformed surface).
Also, as requested in class, here is an excellent textbook for those who are interested in more advanced reading on the subject of computer graphics:
Computer Graphics, Principles and Practice, third edition
Dec 16: Extra class begins at 10am
CLASS ON TUESDAY DECEMBER 16 BEGINS AT 10AM!