2025 April 24

CS 311: Software Triangle Rasterization

The first part of CS 311 develops, pretty much from scratch, a 3D graphics engine based on the triangle rasterization algorithm. Each day, we learn a new concept or algorithm in class, which you then implement in homework.

Except where otherwise noted, each homework assignment is due at the start of the next class meeting. Remember our late policy: If you don't have an assignment finished on time, then don't panic, but do e-mail me about it. Then we can formulate a plan for getting back on track. :)

Day 01

All of your work today is individual. I recommend that you do it on a macOS computer in Olin 304, where all of the necessary software is installed. You read the syllabus, review some C, and write a small painting program as an exercise. Once that's all finished, you can install the necessary software on your own computer, if you like.

Syllabus

Read the syllabus, by which I mean the front page of this course web site.

Optional: C Review

I assume that you already know the C programming language from CS 208 (and perhaps other work). Nevertheless, I provide tutorials for students who want to review. Here's the C that we need to know right now.

000mainHelloWorld.c: Study this file. By "study" I mean: Download the file, read it in a text editor, follow the instructions for compiling and running it, and figure out what each line of code does. If you want to test your understanding, then try altering a line, predicting what it will do, running the code, and checking your prediction.

010mainIntsDoubles.c: Study this file.

020mainIfWhileFor.c: Study this file.

030mainFunctions.c: Study this file.

First Graphics

Even if you already know C, you need to do this three-file tutorial, because it introduces the minimalist graphics library that we use for most of the course.

040pixel.h: Skim this file. By "skim" I mean: Read through it quickly. You should be familiar with what functions are in there, but you are not expected to memorize or understand the details.

040pixel.c: Compile this file into 040pixel.o by following the instructions at the top. You are not expected to study or even skim this file.

040mainLinking.c: Study this file.

Exercise

Here is the work that you're going to hand in.

050mainPainting.c: Make a copy of 040mainLinking.c called "050mainPainting.c". Edit this file to make a simple painting program. Your program should make strokes of color whenever the user drags the mouse across the window. (Dragging means moving the mouse while holding down the left mouse button.) Your program should also let the user change the paint color by pressing the R, G, B, C, M, Y, K, and W (that's red, green, blue, cyan, magenta, yellow, black, and white) keys. To pass information among different parts of your user interface, you may have to use a few global variables. For example, here are some of mine, which I declare after the #include statements and before my first function definition:

double red = 1.0, green = 1.0, blue = 1.0;
int mouseIsDown = 0;

Just for fun, my program has a couple of extra features: letting the user change the brush size and clear the window. Here's a screenshot (click to enlarge):

050mainPainting screenshot

Once your program is tested, make sure that your code is clean and commented. Follow these instructions to access the COURSES file server. Put your 050mainPainting.c file in your CS 311 hand-in folder on COURSES. Warning: Once you submit your file, you may no longer be able to edit it.

Optional: Software Installation

You are welcome to do all of CS 311 on the lab machines in Olin 304, but it might be more convenient for you to work on your own computer. In most cases, installing the necessary software is not difficult. Follow the installation instructions if you like.

Optional: Work Ahead

Some of the work for Day 02 is already posted below. Whenever I post work ahead of time, it means that I have finalized that work, and you are welcome to get started on it, if you already know everything that you need to know. In this case, you can work on C Review and Vectors but not Rasterizer.

Day 02

All of your work today is individual. We start implementing our triangle-rasterizing graphics engine. So your first programming task is to make the rasterizer. It is important that you carefully write and thoroughly test this code, because the next four weeks of the course depend on it.

Optional: C Review

060mainArrays.c: Study this file.

Rasterizer

070triangle.c: Make this file. In it, write the following function to rasterize a triangle. The function assumes that a window has already been made. For each pixel in the triangle, the function calls pixSetRGB with the given RGB color. You probably want to use some helper functions. My implementation uses a helper function triRenderWithALeft, as we discussed in class. There should not be any #include statements in this file; headers and implementations are all included in order in main.c below.

/* The inputs are three vertices of a 2D triangle and an RGB color. The vertices 
are assumed not to be co-linear. They are assumed to be in counter-clockwise 
order. This function renders the triangle into a pre-existing window in the 
given color. */
void triRender(
        const double a[2], const double b[2], const double c[2], 
        const double rgb[3]) {
    ...
}

070mainRasterizing.c: Make this file by copying 040mainLinking.c. Near the top, but after all header files, #include 070triangle.c to include your rasterizer. Feel free to delete the callbacks. In the main function, write code to demonstrate that your rasterizer works. Test thoroughly. Here's a screenshot of mine:

070mainRasterizing screenshot

Make sure that 070triangle.c and 070mainRasterizing.c are functioning, clean, and commented. Put them in your CS 311 hand-in folder on COURSES.

Vectors

In this section, there is no work to hand in. So how do I know whether you've done it? I don't. But if you don't do it today, then you'll understand the Day 03 class meeting less well, and you'll have to catch up later anyway. So do the work now.

080vectors.pdf: Study this tutorial. Answer the comprehension question buried in there.

080vector.c: Here are some basic vector functions. We add more in the coming weeks. In previous editions of CS 311, I have asked students to write these and other mathematical functions. This term, in an attempt to lessen the workload, I am simply giving these functions to you. Don't make me regret it. For each function, you should (A) have the interface/header memorized, and (B) be able to produce an implementation if asked.
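As a sense of what "be able to produce an implementation" means, here are sketches of three such functions. The distributed 080vector.c is the authority; the signatures below are my guesses at its style and may differ from the real file.

```c
#include <assert.h>

/* Copies the dimension-dim vector v into the vector copy. */
void vecCopy(int dim, const double v[], double copy[]) {
    for (int i = 0; i < dim; i += 1)
        copy[i] = v[i];
}

/* Adds the dimension-dim vectors v and w, storing the sum in vPlusW, which 
is allowed to equal v or w. */
void vecAdd(int dim, const double v[], const double w[], double vPlusW[]) {
    for (int i = 0; i < dim; i += 1)
        vPlusW[i] = v[i] + w[i];
}

/* Scales the dimension-dim vector w by c, storing the result in cTimesW. */
void vecScale(int dim, double c, const double w[], double cTimesW[]) {
    for (int i = 0; i < dim; i += 1)
        cTimesW[i] = c * w[i];
}
```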

Optional: Work Ahead

This is the last time I'll say it, but feel free to work ahead on any tasks that you are ready to do. In particular, doing the Preliminaries section of Day 03 might improve your understanding of the Day 03 class meeting.

Day 03

All of your work today is individual. We add the ability to interpolate quantities across triangles. After rasterization, interpolation is the second-most fundamental concept in our graphics engine. It enables a wide variety of effects — far beyond what we see today.

Preliminaries

090matrices2By2.pdf: Study this file. Solve the comprehension problem buried in there.

090mainMatrices2By2.c: Study this file. Even if you've done a lot of C, it's possible that you've never used matrices in C.

100matrix.c: Here are some basic matrix functions. We add more in the coming weeks. In previous editions of CS 311, I have asked students to write this code. This term, I am giving the code to you. For each function, you should have the interface memorized, and you should be able to produce an implementation if asked.

Interpolator

100triangle.c: In a copy of 070triangle.c, change the interface for triRender to

/* The inputs are three vertices of a 2D triangle, a uniform RGB color, and 
three more RGB colors associated to the vertices, in order. The vertices are 
assumed not to be co-linear. They are assumed to be in counter-clockwise order. 
This function renders the triangle into a pre-existing window, in colors 
interpolated from the vertices modulated by the uniform color. */
void triRender(
        const double a[2], const double b[2], const double c[2], 
        const double rgb[3], const double alpha[3], const double beta[3], 
        const double gamma[3])

Implement linear interpolation of color, as we discussed in class. At each rasterized pixel, the final color is the interpolated color modulated by the uniform color rgb. Use functions from 080vector.c and 100matrix.c wherever they are appropriate. For example, at each rasterized pixel I use vecSubtract, mat2Transform, vecScale, vecAdd, and vecModulate. (But don't include those files here. They are included in 100mainInterpolating.c.)
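Here is the per-pixel math in a self-contained sketch (the real code would instead call vecSubtract, mat2Transform, vecScale, vecAdd, and vecModulate). Express x - a in the basis b - a, c - a by inverting a 2x2 matrix; the resulting coordinates (p, q) weight the vertex colors. The function name interpolateColor is mine, for illustration only.

```c
#include <assert.h>
#include <math.h>

/* Interpolates the vertex colors alpha, beta, gamma at the point x inside 
triangle abc, then modulates by the uniform color rgb. Assumes the triangle 
is not degenerate. */
void interpolateColor(
        const double a[2], const double b[2], const double c[2], 
        const double rgb[3], const double alpha[3], const double beta[3], 
        const double gamma[3], const double x[2], double color[3]) {
    /* M has columns b - a and c - a. */
    double m00 = b[0] - a[0], m01 = c[0] - a[0];
    double m10 = b[1] - a[1], m11 = c[1] - a[1];
    double det = m00 * m11 - m01 * m10;
    /* (p, q) = M^-1 (x - a). */
    double p = (m11 * (x[0] - a[0]) - m01 * (x[1] - a[1])) / det;
    double q = (-m10 * (x[0] - a[0]) + m00 * (x[1] - a[1])) / det;
    for (int i = 0; i < 3; i += 1)
        color[i] = rgb[i] * 
            (alpha[i] + p * (beta[i] - alpha[i]) + q * (gamma[i] - alpha[i]));
}
```

Sanity checks: at x = a we get (p, q) = (0, 0) and hence alpha; at x = b we get (1, 0) and hence beta.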

100mainInterpolating.c: In a copy of 070mainRasterizing.c, include 080vector.c, 100matrix.c, and 100triangle.c (instead of 070triangle.c). Demonstrate that your color interpolation works. Here's a screenshot of mine, which uses red, green, and blue vertices and a yellow rgb:

100mainInterpolating screenshot

You have two files to hand in: 100triangle.c and 100mainInterpolating.c. As usual, check that your files are functioning, clean, and commented. Then submit them to COURSES.

Day 04

Today you start to work with your assigned partner (if any). When you need to use old files, you can use whichever partner's old files you prefer. We implement texture mapping, which is one of the most important graphical effects enabled by interpolation.

Optional: C Review

110mainPointers.c: Study this file.

120mainStructs.c: Study this file. Even if you've done a lot of C, you might not know the best way to define struct data types. Pay attention to DoubleNode, for example.

Infrastructure

stb_image.h: Download this file. You are not expected to read or even skim it. Parts of it are hard to understand unless you know C well. This is a public-domain image-loading library. Despite the ".h" suffix, it contains not just interfaces but also implementations. This library is used by a certain function in 130texture.c, which I have already written for you.

130texture.c: Skim this file. You should be familiar with all of the constants, structs, and functions, but you may not understand the details of some of the function implementations. This file defines a data type, texTexture, that we use for texture mapping. It's all complete, except that the texSample function has a cosmetic bug: It responds with nearest-neighbor filtering, no matter what kind of filtering you request. You fix that bug later in this assignment. For now, just pretend that there is no bug.

Nearest-Neighbor Filtering

130triangle.c: In a copy of 100triangle.c, update triRender to have the following interface. Instead of interpolating colors, you are interpolating texture coordinates and then passing them to texSample.

/* The inputs are three vertices of a 2D triangle, a uniform RGB color, a 
pre-configured texture object, and texture coordinates associated to the 
vertices, in order. The vertices are assumed not to be co-linear. They are 
assumed to be in counter-clockwise order. This function renders the triangle 
into a pre-existing window, in colors sampled from the texture modulated by the 
uniform color. */
void triRender(
        const double a[2], const double b[2], const double c[2], 
        const double rgb[3], const texTexture *tex, const double alpha[2], 
        const double beta[2], const double gamma[2])

130mainTexturing.c: In a copy of 100mainInterpolating.c, include 130texture.c and 130triangle.c. Demonstrate your new triRender. Somewhere after pixInitialize you need to make a texTexture to pass to triRender. Use texInitializeFile, and don't forget to texFinalize before you pixFinalize. (Resources should be dismantled in last-in-first-out order.) The vertices should have texture coordinates ST instead of colors RGB. If your results look wrong, and you don't know why, then try a different image; the STB library doesn't handle some images well. Here's a screenshot of my program, which uses a colorized photograph of one of my favorite mathematicians:

130mainTexturing screenshot

Linear Filtering

140texture.c: In a copy of 130texture.c, find the part of texSample where linear filtering is supposed to happen, and replace it with your own linear filtering code. Multiple calls to texGetTexel are required. Do not assume that the texels are RGB colors. Instead assume that they are arrays of dimension tex->texelDim. It's manageable if you use vecScale, vecAdd, etc.

140mainTexturing.c: In a copy of 130mainTexturing.c, include 140texture.c. Add a global variable and a keyboard handler, so that by pressing GLFW_KEY_ENTER the user can quickly switch between nearest-neighbor and linear filtering. It will help you evaluate your filtering. If you cannot see much of a difference, then try using a coarser image — maybe just eight pixels wide and eight pixels tall?

You have five files to hand in: 130triangle.c, 130mainTexturing.c, 140texture.c, 140mainTexturing.c, and the image file that you used (in case that helps the grader). Make sure that your code files are functional, clean, and commented. Make sure that both partners are credited in a comment at the top of each file. Make sure that both partners have a copy of each file. Then, just one partner submits the files to COURSES.

Day 05

Today you continue working with your partner. We introduce the fragment shader. This abstraction makes our graphics engine more flexible and better aligned with how graphics is done in the real world. You also get a chance to design your own creative effect.

Fragment Shader

150mainAbstracted.c: Study this file, but you won't be able to run it yet, for several reasons. First, you haven't written the shadeFragment function yet. Second, you don't have 160triangle.c. Third, you don't have 160shading.c. Fourth, your texture image file might not have the same name as mine. In the following instructions, we fix these issues in reverse order.

160mainAbstracted.c: In a copy of 150mainAbstracted, change the texture image file name.

160shading.c: Create this file. In it, define a struct data type, following 120mainStructs.c, called shaShading. Right now it just has three int members: unifDim, attrDim, and texNum, in that order. (Later in the course it holds more information.)
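For concreteness, here is one way the struct might look; match the typedef style that 120mainStructs.c actually demonstrates, and note that the comments are my glosses.

```c
#include <assert.h>

/* Describes the dimensions of the data flowing through the shaders. */
typedef struct shaShading shaShading;
struct shaShading {
    int unifDim;    /* number of uniforms */
    int attrDim;    /* number of attributes per vertex */
    int texNum;     /* number of textures */
};
```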

160triangle.c: Make a copy of 130triangle.c. In it, update the interface for triRender to the one below. You now have an array of uniform information, an array of texture pointers, three arrays of vertex attributes, and a shaShading struct that specifies how many elements are in each of those arrays. Edit your implementation so that, at each rasterized pixel, it computes an interpolated attribute array attr of dimension sha->attrDim, invokes shadeFragment to get an RGB color, and passes this color to pixSetRGB. (Let me emphasize that pixSetRGB is called from triRender, not from shadeFragment. Also, keep in mind that sha->attrDim might be any integer greater than or equal to 2. Your code should work whether sha->attrDim is 2, 25, or 100, for example.)

/* Assumes that the 0th and 1th elements of a, b, c are the 'x' and 'y' 
coordinates of the vertices, respectively (used in rasterization, and to 
interpolate the other elements of a, b, c). */
void triRender(
        const shaShading *sha, const double unif[], const texTexture *tex[], 
        const double a[], const double b[], const double c[])

160mainAbstracted.c: Now write shadeFragment. It should modulate a sampled color, interpolated color, and uniform color all together. Run your program. Here's mine:

160mainAbstracted screenshot

Demo

170mainEffect.c: Make a copy of 160mainAbstracted.c. Edit it to produce an entirely different visual effect. Be creative! My only requirement is that you must use at least two textures. The most interesting edits are in shadeFragment, but there may be some supporting edits throughout 170mainEffect.c. The rest of your files, such as 160triangle.c, should be unchanged. If you want me to share your demo with the class, then e-mail me a screenshot and a brief (one or two sentences) explanation of how you made your effect. For example, mine is below. See class for an explanation of how it works.

170mainEffect screenshot

Make sure that 160shading.c, 160triangle.c, 160mainAbstracted.c, and 170mainEffect.c are functioning, clean, and commented. Make sure that both partners are credited at the top of each file. Share the files with your partner. Submit one copy of each file to COURSES. Also submit your texture files.

Day 06

As you continue working with your partner, we start to draw meshes made of many 2D triangles. Also we introduce the vertex shader abstraction.

Mesh Renderer

180mesh.c: Skim. This file implements a data type for triangular meshes. You can see initializer/finalizer functions, accessor functions, and functions to read and write files. There's also meshRender. Fill in its implementation, using other functions such as meshGetTrianglePointer, meshGetVertexPointer, and triRender. Don't overlook the part of the specification about attrDim.

180mesh2D.c: Skim. This file offers "convenience" functions for building three common kinds of 2D meshes.

180mainMesh.c: Make this file, perhaps starting from a copy of 160mainAbstracted.c. Demonstrate that your mesh renderer is working correctly. I recommend using the convenience builders and one texture. Don't forget to finalize everything that you successfully initialize.

Vertex Shader

Now we make a big change to our graphics engine. It requires alterations to several files (because our engine is not very well abstracted, because we're learning how to abstract it right now). The change is: What we used to call attributes are now either attributes or varyings, depending on whether they exist before or after the vertex shader.

190shading.c: In a copy of 160shading.c, add an int varyDim member to the shading struct.

190triangle.c: Make a copy of 160triangle.c. Your triangle code will never again see an attribute vector. It will see varying vectors instead. Make all required changes. For example, replace sha->attrDim with sha->varyDim. When I did this, my code had a bunch of variable names and comments that referred to attributes, so I took this opportunity to clean those up too.

190mainAbstracted.c: Make a copy of 180mainMesh.c. Because shadeFragment is called only by triRender, and triRender sees varyings not attributes, it follows that shadeFragment sees varyings not attributes. So, in shadeFragment, replace attr with vary, ATTRX with VARYX, etc. Just below shadeFragment, add the shadeVertex function below. Above shadeFragment, keep the constants ATTRX, etc., but add constants VARYX, etc. And don't forget to initialize the varyDim member of your shading struct in main.

/* Outputs vary, based on the other parameters, which are unaltered. Like 
shadeFragment, this function should not access any variables other than its 
parameters and any local variables that it declares. */
void shadeVertex(
        int unifDim, const double unif[], int attrDim, const double attr[], 
        int varyDim, double vary[]) {
    /* For now, just copy attr to vary. Baby steps. :) */
    vary[VARYX] = attr[ATTRX];
    vary[VARYY] = attr[ATTRY];
    vary[VARYS] = attr[ATTRS];
    vary[VARYT] = attr[ATTRT];
}

190mesh.c: In a copy of 180mesh.c, in meshRender, use shadeVertex to transform attributes to varyings before passing them to triRender. (I can imagine two ways to do this. One way uses extra time but no extra memory, and the other way attempts to save time by using extra memory. I recommend that you pursue the first way, because it is simpler. If you don't know what I'm talking about, then do whatever makes sense to you.)

190mainAbstracted.c: Test whether everything works up to this point. Then add one uniform to express rotation and two uniforms to express translation. Implement the rotation followed by translation in shadeVertex. I recommend that you have a time step handler that animates the rotation and/or translation, because animations give you more debugging information than still images do. Here's a still image of my program:

190mainAbstracted screenshot

As always, make sure that your files — 180mesh.c, 180mainMesh.c, 190shading.c, 190triangle.c, 190mesh.c, and 190mainAbstracted.c — are functioning, clean, commented, and credited. Make sure both partners have a copy, and submit one copy to COURSES.

Optional: Study Question

Describe the two ways of implementing meshRender that were mentioned above.

Day 07

Today's work with your partner doesn't really add any new features. Instead, it organizes, accelerates, and stress-tests the code that we already have. That's valuable work, because we're about to build a 3D graphics engine atop this 2D foundation, and we want the foundation to be as strong as possible.

Shader Program

Together, the vertex and fragment shaders constitute a single shader program. A shaShading struct should describe a complete shader program, including the functions. Then we can conveniently switch shader programs at run time. (A real graphics program might use several shader programs in each animation frame.)

200mainFunctionPointers.c: Study.

210shading.c: In a copy of 190shading.c, add shadeVertex and shadeFragment members to the end of the shading struct. They are function pointers.

210triangle.c: In a copy of 190triangle.c, replace shadeFragment with sha->shadeFragment. Easy. :)

210mesh.c: In a copy of 190mesh.c, replace shadeVertex with sha->shadeVertex.

210mainAbstracted.c: Study this file, focusing on the two comments that describe new stuff. Change the name of the texture image file but nothing else in this C file. Test that it works with the rest of your code. If not, then fix the rest of your code, not this file.

Optimization

The next issue is that our approach to writing shadeVertex is inefficient (why?), and it will become more inefficient as the course progresses. So let's take this opportunity to restructure the code a bit.

220matrices3By3.pdf: Study.

220matrix.c: Study. There are three new functions for you to know.

220mainHomogeneous.c: Make a copy of 210mainAbstracted.c. Replace the three rotation-translation uniforms with nine uniforms. They store a 3x3 matrix. In your time step handler, configure this matrix using code like the code below. The rotation angle and translation vector are global variables, one or both of which is altered by the time step handler. UNIFMODELING is a compile-time constant storing the index of the first (0th) entry of the matrix in the uniforms. You are not expected to fully understand the pointer manipulation in the third line of code. Intuitively, it packs a 3x3 matrix into a 9-dimensional vector.

double isom[3][3];
mat3Isometry(rotationAngle, translationVector, isom);
vecCopy(9, (double *)isom, &unif[UNIFMODELING]);

220mainHomogeneous.c: In shadeVertex, you can "unpack" the 9-dimensional vector back into a 3x3 matrix and use it as follows. Again, you are not expected to understand this pointer manipulation. But you do need to figure out what attrHomog and varyHomog are, and how they fit into the rest of your code. Once you're done, test your program. Its output should be identical to that of 210mainAbstracted.c.

mat3Transform((double(*)[3])(&unif[UNIFMODELING]), attrHomog, varyHomog);
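If you'd like to see why these casts are legal, recall that a C 2D array double m[3][3] is nine doubles stored contiguously in row-major order. So (double *)m views it as a 9-dimensional vector, and (double(*)[3]) views nine contiguous doubles as a 3x3 matrix. A quick self-contained check (the function name packUnpackDemo is mine):

```c
#include <assert.h>

/* Packs a 3x3 matrix into a flat 9-vector, then views it as a matrix again. 
Returns 1 if the round trip preserves the entries. */
int packUnpackDemo(void) {
    double isom[3][3] = {
        {1.0, 2.0, 3.0}, 
        {4.0, 5.0, 6.0}, 
        {7.0, 8.0, 9.0}};
    double unif[9];
    /* Pack: view the matrix as nine consecutive doubles and copy them. */
    double *flat = (double *)isom;
    for (int i = 0; i < 9; i += 1)
        unif[i] = flat[i];
    /* Unpack: view the flat array as a 3x3 matrix again. Row 1, column 2 of 
    the matrix is element 1 * 3 + 2 = 5 of the flat array. */
    double (*m)[3] = (double(*)[3])unif;
    return m[1][2] == 6.0 && unif[5] == 6.0;
}
```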

Stress Test

Let's make a serious test of everything up to this point, so that we can be confident in our upcoming 3D work.

230mainShadings.c: Make a demo program that renders (at least) two meshes with (at least) two different shading structs. To make the shading structs genuinely different, let's say that they should have different shader functions attached. They may also differ in their integer members. Be creative! If you want me to share your demo with the class, then e-mail me a screenshot and a short description. Screenshots of two examples are below. In the first example, the body of the spaceship is a square texture-mapped with one texture. When the user is pressing a certain key, the spaceship accelerates forward, and an exhaust plume appears. The exhaust plume is a triangle with no texture. The second example is a Sokoban clone with a psychedelic animated background. (These are prototypes of video games. Your program does not have to be interactive at all.)

230mainShadings screenshot

231mainSokoban screenshot

Clean up and hand in 210shading.c, 210triangle.c, 210mesh.c, 210mainAbstracted.c, 220mainHomogeneous.c, and 230mainShadings.c. Please include any texture files that are needed to run your software.

Day 08

Today we transition from 2D to 3D. This change requires some math, which you must study, but which I have mostly implemented for you. This change requires 3D meshes, which I have implemented for you. It requires numerous other small changes, as we discuss in class.

Infrastructure

240vectorsMore.pdf: Study.

240vector.c: Study. There are several new functions that you must know.

240matrices3By3More.pdf: Study.

240matrices4By4.pdf: Study.

240matrix.c: Study. There are several new functions that you must know. It is not essential that you know the helper functions.

240mesh3D.c: Skim. This file contains "convenience" functions for building 3D meshes. The attribute dimensions are 3 + 2 + 3 = 8. The first three attributes are X, Y, Z position. The next two attributes are S, T texture coordinates. The last three attributes are called N, O, P. We talk about them later in the course.

Backface Culling

250triangle.c: Now that we're working in 3D, it is possible that triRender will be passed a co-linear or clockwise-ordered triangle. In those cases, triRender's behavior used to be unspecified. Now we specify that it should draw nothing. My code has always (since 100triangle.c) done this backface culling automatically, by checking that the determinant returned by mat2Invert is positive. If you've been doing the same, then 250triangle.c is just a copy of 210triangle.c. If you've not been doing the same, then you need to implement this feature now. The details depend on how your triRender code is structured. If, like me, you always keep the vertices counter-clockwise, then do the determinant test at the start and be done. But other students reorder the vertices into clockwise order before rasterizing. They might consider a cross product test before reordering, instead of my determinant test after reordering. If you don't know what I'm talking about, then re-read 240vectorsMore.pdf.
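The determinant test, in isolation, looks like this. Twice the signed area of triangle abc is positive exactly when the vertices are in counter-clockwise order, so culling amounts to returning early when the value is not positive. The function name is mine; in your code the quantity may well fall out of mat2Invert for free.

```c
#include <assert.h>

/* Returns 1 iff the 2D triangle abc has counter-clockwise vertex order. 
The determinant is twice the triangle's signed area; it is zero for 
co-linear vertices and negative for clockwise order. */
int triIsCounterClockwise(
        const double a[2], const double b[2], const double c[2]) {
    double det = (b[0] - a[0]) * (c[1] - a[1]) - 
                 (b[1] - a[1]) * (c[0] - a[0]);
    return det > 0.0;
}
```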

250main3D.c: Alter the name of the texture image file, but make no other alterations. Study. The program should display a rotating 3D box as in the screenshot below. If not, then fix your other files. (I made this file by starting from 220mainHomogeneous.c and making many small edits as described in class.)

250main3D screenshot

Clean up and hand in 250triangle.c and 250main3D.c.

Day 09

Today you and your partner add depth buffering to the graphics engine. Unfortunately, this feature requires edits to many files. Roughly speaking, my instructions work from the deepest part of the algorithm (the shaders) up to the shallowest part (the main function).

Depth Buffer

260depth.c: Skim.

260mainDepth.c: Start with a copy of 250main3D.c. Enlarge your varyings so that they convey Z immediately after X and Y. Update varyDim and shadeVertex accordingly. Update shadeFragment to have the following interface. The function outputs not just an RGB color but also a depth D. At this point in the course, the depth should be the negation of the varying Z coordinate.

void shadeFragment(
        int unifDim, const double unif[], int texNum, const texTexture *tex[], 
        int varyDim, const double vary[], double rgbd[4])

260shading.c: In a copy of 210shading.c, update the shadeFragment member's type to match that function's new interface.

260triangle.c: Here's the interesting part. Make a copy of 250triangle.c. The new interface for triRender is shown below. Deep in its algorithm, it receives an RGBD value from shadeFragment. Use it.

void triRender(
        const shaShading *sha, depthBuffer *buf, const double unif[], 
        const texTexture *tex[], const double a[], const double b[], 
        const double c[])
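"Use it" means a per-pixel depth test. Here is the idea in a self-contained sketch; the real interface lives in 260depth.c, so the buffer and the names depthClear and handleFragment below are my stand-ins, not the course API.

```c
#include <assert.h>

#define WIDTH 64
#define HEIGHT 64

/* Minimal stand-in depth and color buffers: each pixel remembers the depth 
of the nearest fragment drawn so far. */
double depths[HEIGHT][WIDTH];
double colors[HEIGHT][WIDTH][3];

/* Resets every pixel's depth, typically to a very large value, once per 
animation frame. */
void depthClear(double depth) {
    for (int y = 0; y < HEIGHT; y += 1)
        for (int x = 0; x < WIDTH; x += 1)
            depths[y][x] = depth;
}

/* The test that triRender performs with the RGBD value returned by 
shadeFragment: paint the fragment only if it is nearer (smaller depth) than 
whatever is already at that pixel. */
void handleFragment(int x, int y, const double rgbd[4]) {
    if (rgbd[3] < depths[y][x]) {
        depths[y][x] = rgbd[3];
        colors[y][x][0] = rgbd[0];
        colors[y][x][1] = rgbd[1];
        colors[y][x][2] = rgbd[2];
    }
}
```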

260mesh.c: Make a copy of 210mesh.c. Because triRender is called from meshRender, and triRender needs a depth buffer, we have to update meshRender as follows.

void meshRender(
        const meshMesh *mesh, depthBuffer *buf, const shaShading *sha, 
        const double unif[], const texTexture *tex[])

260mainDepth.c: In the appropriate places, declare, initialize, and finalize a depth buffer. On each animation frame, around the time that you clear the RGB buffer, also clear the depth buffer. Use the new meshRender. Also make various other required changes. I recommend that you inspect every line of code in this file. Test your program. The easiest way to make a strong test is to use two meshes that intersect. Here's mine:

260mainDepth screenshot

Optimization

The point of depth buffering is to get objects to occlude correctly. The point is not to improve the code's speed. However, we get a speed improvement as a bonus. To understand how, consider: What happens if a triangle crosses the boundary of the window, so that some of its pixels are outside the window? Well, pixSetRGB ignores requests to paint those pixels, so your image is unharmed. But your rasterizer still wastes time trying to paint them. For the first time in the course, triRender knows the dimensions of the window, so you can avoid this wasted time. As we get more into 3D effects, the time saved is worth the work of implementing this optimization.

270triangle.c: In a copy of 260triangle.c, implement the optimization. (By the way, this is the final version of triangle.c. Hooray!)

270mainDepth.c: In a copy of 260mainDepth.c, include the new version of triangle.c. Don't change anything else. Test. You should get exactly the same output as you did before. To appreciate the optimization, try using a mesh that transgresses the window boundaries. (In my test, the optimization quintupled my frame rate. Your results may vary.)

Clean up and hand in 260shading.c, 260triangle.c, 260mesh.c, 260mainDepth.c, 270triangle.c, and 270mainDepth.c.

Day 10

Today you and your partner implement the viewport transformation and both orthographic and perspective projections. Then you make some demonstrations, to solidify your understanding of what the viewport and projections mean.

Infrastructure

In this block of work, you're mostly transcribing math into code.

280matrices4By4More.pdf: Study.

280matrix.c: Study. A couple of new functions are implemented. There's also one new function for you to implement.

280camera.c: Study. This file defines a camera "class", much like the texture and mesh classes that we already have. Implement the two un-implemented methods using the data stored in the projection member.

280mesh.c: In a copy of 260mesh.c, change meshRender to have the following interface. Just before it calls triRender, it must perform homogeneous division and the viewport transformation on the XYZW part of the varyings. You may assume that the vertex shader has put XYZW into the first four entries of its output vary. You may assume that W is non-zero. (At this point in the course, W might be zero in rare cases. And then your program might malfunction. That's okay for now.)

void meshRender(
        const meshMesh *mesh, depthBuffer *buf, const double viewport[4][4], 
        const shaShading *sha, const double unif[], const texTexture *tex[])
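The new per-vertex step can be sketched as follows, self-contained (the real code would use the 4x4 functions from 280matrix.c; this inlines the matrix-vector product). The function name is mine, and the test below assumes the usual convention where the viewport maps [-1, 1] to window coordinates.

```c
#include <assert.h>
#include <math.h>

/* Performs homogeneous division and then the viewport transformation on the 
first four entries (XYZW) of the varying vector vary, in place. W is 
assumed non-zero. */
void varyDivideAndViewport(const double viewport[4][4], double vary[]) {
    /* Homogeneous division: afterward W = 1. */
    for (int i = 0; i < 4; i += 1)
        vary[i] /= vary[3];
    /* Viewport transformation of the resulting (x, y, z, 1). */
    double result[4];
    for (int i = 0; i < 4; i += 1) {
        result[i] = 0.0;
        for (int j = 0; j < 4; j += 1)
            result[i] += viewport[i][j] * vary[j];
    }
    for (int i = 0; i < 4; i += 1)
        vary[i] = result[i];
}
```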

Demonstrations

Remember that the camera is at the origin of the world and pointing down the world's negative Z-axis. From the camera's point of view, the world X-axis points right and the world Y-axis points up.

280mainViewProj.c: Start with a copy of 270mainDepth.c. Your goal is to make a new program that produces exactly the same imagery, but using the viewport and projection. So make a viewport transformation based on the window size, and pass it to meshRender. Configure a camera with an orthographic projection. In the X direction, the viewing volume should run from 0 to the window width. In the Y direction, it should run from 0 to the window height. In the Z direction, its bounds don't matter much (at this point of the course). Maybe make them run from -200 to -100? Once the camera is configured, you need to get its projection matrix into the uniforms and perform the projection in the vertex shader. Also, change the vertex shader documentation to say that X, Y, Z, W are the first four entries of the vary output. In the fragment shader, the depth should now be the varying Z, not its negation (because the negation happens implicitly in the projection). You might want to check every line of 280mainViewProj.c, to make sure that you haven't missed any needed changes. Test.

On paper/chalkboard/tablet/etc.: Sketch a 3D scene, as follows. Draw X-, Y-, and Z-axes for the world. Sketch the camera into the scene, with a perspective frustum attached. Sketch a couple of scene objects, such as spheres and boxes. They should be about 10 meters (or whatever your length units are) in front of the camera and contained in the perspective frustum. So what are the XYZ coordinates of their centers? How big are the objects? For example, if one object is a sphere, then what is its radius? Also, try to imagine what the objects look like, from the camera's point of view. (You might be tempted to skip this part of the assignment. Skipping it may greatly increase your work in the next step. Don't skip it!)

290mainWorld.c: Make this file, probably starting from a copy of 280mainViewProj.c. Implement the scene that you just sketched. Even if you're using the same shapes as in 280mainViewProj.c, you need to adjust the mesh initializations, because the mesh sizes are now in meters rather than pixels. I recommend that you build each mesh around its local origin. Use modeling transformations to place the objects where you want them. Using camSetFrustum, let the user switch back and forth between perspective and orthographic by pressing the P key. Here's mine, which shows a tent erected under a tree (in perspective):

290mainWorld screenshot

Clean up and hand in 280matrix.c, 280camera.c, 280mesh.c, 280mainViewProj.c, and 290mainWorld.c.

Day 11

Today you and your partner have two features to implement: a movable camera and perspective-corrected interpolation.

Camera Isometry

The camera's position and orientation are controlled by an isometry (a combination of rotation and translation). The camera's effect in the vertex shader is the inverse isometry followed by the projection.
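Concretely, inverting an isometry needs no general matrix inversion. Here is a hedged sketch with illustrative names (the isometry class stores its data in its own way): if M sends p to R p + t, with R a 3x3 rotation, then the inverse sends q to R^T q - R^T t, because a rotation satisfies R^{-1} = R^T.

```c
#include <assert.h>
#include <math.h>

/* Sketch: compute the inverse of the isometry (rot, transl) as
   (rot^T, -rot^T transl). Illustrative, not the course's actual method. */
void isoInverse(const double rot[3][3], const double transl[3],
        double invRot[3][3], double invTransl[3]) {
    for (int i = 0; i < 3; i += 1)
        for (int j = 0; j < 3; j += 1)
            invRot[i][j] = rot[j][i];        /* R^T */
    for (int i = 0; i < 3; i += 1) {
        invTransl[i] = 0.0;                  /* -R^T t */
        for (int j = 0; j < 3; j += 1)
            invTransl[i] -= rot[j][i] * transl[j];
    }
}
```

The -R^T t computation multiplies a vector by a transposed rotation, which is exactly the kind of work a transpose-transform helper is for.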

300isometry.c: Study. As the course proceeds, isometries become increasingly important to us, so they're worth turning into a class. Implement the two un-implemented methods. In at least one of them, you should use mat3TransposeTransform (and you should know why I say that).

300camera.c: Study. Pay particular attention to camLookAt and camLookFrom, which are extremely useful in controlling a camera's isometry. You are expected to understand how they work. Draw pictures to figure them out. Talk to me if you can't. There are three un-implemented methods. Implement two of them by copying your code from 280camera.c, and then implement the third.

300mainCamera.c: Make a copy of 290mainWorld.c. Change how the camera sends its 4x4 matrix to the vertex shader. Use camLookAt to aim the camera at the scene objects. If they are clustered around the point (0, 0, -10), and you view them from a distance of 10 with φ = 0 and θ = -π / 2, then you should get the same image as you did earlier. Test.

310mainCamera.c: In a copy of 300mainCamera.c, add a keyboard handler to let the user control the camera's isometry. The left- and right-arrow keys should change θ. The up- and down-arrow keys should change φ. Let's say that the O and L keys should change the camera's distance from its target. (Changing that distance requires calls to both camLookAt and camSetFrustum. Why?) The P key should still control the projection type. Test. Here's mine:

310mainCamera screenshot
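The angle-updating part of such a handler might look like the sketch below. The function name, step logic, and the elevation convention for φ are all assumptions; the real version lives in your key callback and finishes with calls to camLookAt (and camSetFrustum when the distance changes).

```c
#include <assert.h>
#include <math.h>

#define PI 3.14159265358979323846

/* Hypothetical bookkeeping for the arrow keys: nudge theta and phi,
   wrapping theta into (-pi, pi] and clamping phi (assumed to measure
   elevation) so the camera never flips over the poles. */
void cameraNudge(double *theta, double *phi, double dTheta, double dPhi) {
    *theta += dTheta;
    if (*theta > PI)
        *theta -= 2.0 * PI;
    else if (*theta <= -PI)
        *theta += 2.0 * PI;
    *phi += dPhi;
    if (*phi < -PI / 2.0)
        *phi = -PI / 2.0;
    else if (*phi > PI / 2.0)
        *phi = PI / 2.0;
}
```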

Perspective-Corrected Interpolation

awesome.png: Download this aptly named 8x8 image.

320mainInterpolating.c: Study. When you run this program, the texture should appear bent, as in the screenshot below left, because we have not yet corrected our interpolation for perspective.

330mesh.c: Make a copy of 280mesh.c. Formerly you performed the homogeneous division on just the XYZW components of the varying vector. Now you perform homogeneous division on the entire varying vector.

330mainInterpolating.c: Start with a copy of 320mainInterpolating.c. Add another varying. Coming out of the vertex shader, it always has value 1. Then the mesh renderer that you just made divides it by W, so it arrives at the fragment shader as an interpolated 1 / W value. The fragment shader should immediately divide all varyings after XYZW by this interpolated 1 / W, to balance the division by W that happened in the mesh renderer. Then it uses those corrected varyings to produce its RGBD answer as usual. That's it. When you run the program, the bending problem should be gone, as in the screenshot below right.

320mainInterpolating screenshot 330mainInterpolating screenshot
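To see why this scheme recovers the right values, here is a one-varying sketch (hedged; your pipeline does this per fragment, across a whole triangle rather than along one edge): the rasterizer linearly interpolates s / W and 1 / W, and the fragment recovers the true s by dividing the two interpolants.

```c
#include <assert.h>
#include <math.h>

/* Perspective-correct interpolation of a single varying s between two
   vertices A and B, at linear screen-space parameter t in [0, 1]. */
double perspCorrect(double sA, double wA, double sB, double wB, double t) {
    double sOverW = (1.0 - t) * (sA / wA) + t * (sB / wB);
    double invW = (1.0 - t) * (1.0 / wA) + t * (1.0 / wB);
    return sOverW / invW;
}
```

Note that plain linear interpolation at the screen-space midpoint would give the average of sA and sB, which is wrong whenever wA ≠ wB; that discrepancy is exactly the bending you see in 320mainInterpolating.c.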

Clean up and hand in 300isometry.c, 300camera.c, 300mainCamera.c, 310mainCamera.c, 330mesh.c, and 330mainInterpolating.c.

Optional: Study Questions

In my 310mainCamera.c, the user is able to get a close-up look at the tent only. How could I enhance the user interface, so that the user could get close-up looks at whatever they wanted — for example, the tree?

Suppose that I have perspective-corrected interpolation working, but I want to go back and make the un-corrected image seen above left. Well, I can accomplish this feat by editing just the shaders in 330mainInterpolating.c. How?

Day 12

No new homework is assigned today, because today's class is displaced by oral exams.

Day 13

There's only one more feature for your partnership to implement: clipping at the near plane. Then our triangle-rasterizing 3D graphics engine is essentially complete. Along the way, let's make some new artwork, so that we can more strenuously test how perspective works.

Landscape

340landscape.c: Skim. This file helps us randomly generate grids of elevation data. Those grids can be fed to certain initializers in 240mesh3D.c to make landscape meshes.

340mainLandscape.c: This program should compile and perhaps even run, but it has a major bug, in that I forgot to perspective-correct the interpolation. Oops. Edit the varyings, vertex shader, and fragment shader accordingly. Then the program should work, producing imagery like that of the first two screenshots below. Try out the keyboard controls. Intuitively, the program is a virtual helicopter ride over a desert landscape. If the rendering is unbearably slow, then decrease LANDSIZE. If you decrease LANDSIZE and the landscape looks too crazy, then try decreasing the number and/or size of the faults, blurs, and bumps that randomly generate it.

Clipping

340mainLandscape.c: I would like you to experience the problem for yourself. Fly the helicopter low over the desert, so that the camera views the landscape at a shallow angle. In this situation, switching between orthographic and perspective projections should produce a strong visual effect. The program should work perfectly when all scene objects are in front of the camera. On the other hand, undesirable artifacts (and slow frame rates, and maybe crashes) should arise when the camera is in perspective mode and objects are at or behind the camera plane. For example, the first image below is okay. I push the helicopter forward slightly to get the second image, which is still okay. I push the helicopter forward slightly again to get the third image, which is terribly wrong.

340mainLandscapeA screenshot 340mainLandscapeB screenshot 340mainLandscapeC screenshot

350mesh.c: In a copy of 330mesh.c, implement the simpler near-plane clipping algorithm discussed in class. Shall I refresh your memory? The algorithm happens in clip coordinates — that is, after projection and before homogeneous division. A vertex a is clipped if either a3 ≤ 0 or a3 < -a2. A triangle in which one or more vertices are clipped is simply discarded (not rendered).
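The per-vertex test is tiny. Here it is as a standalone sketch, writing a = (x, y, z, w): the vertex is clipped if w ≤ 0 or w < -z (equivalently, z + w < 0).

```c
#include <assert.h>

/* The per-vertex near-plane clip test, in clip coordinates. */
int vertexIsClipped(const double a[4]) {
    return a[3] <= 0.0 || a[3] < -a[2];
}
```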

350mainClipping.c: In a copy of 340mainLandscape.c, replace 330mesh.c with 350mesh.c. That's it. Test the program. There should be no artifacts or crashes when objects cross the camera plane. Instead, objects should appear raggedly clipped where they cross the near plane. Both perspective and orthographic projections should work.

360mesh.c: In a copy of 350mesh.c, implement the full near-plane clipping algorithm discussed in class. If all three vertices in a triangle are clipped, then the entire triangle is clipped out. If two vertices are clipped, then the triangle is clipped to a smaller triangle. If one vertex is clipped, then the triangle is clipped to two smaller triangles. If no vertices are clipped, then the triangle is not clipped at all. The vertices of the smaller triangles are found by linear interpolation. More specifically, suppose that vertex a is clipped and vertex b is not. Then a + t (b - a), with t = (a2 + a3) / (a2 + a3 - b2 - b3), is where the near plane intersects the triangle side from a to b. This interpolation applies not just to the XYZW coordinates but to the entire varying vector.
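The edge interpolation above can be sketched directly (illustrative name; your version lives inside 360mesh.c and feeds the resulting vertices to the rasterizer). Note that t is chosen exactly so that the result lands on the near plane z + w = 0, and that the whole varying vector is interpolated, not just XYZW.

```c
#include <assert.h>
#include <math.h>

/* Given a clipped vertex a and an unclipped vertex b (whole varying
   vectors of dimension varyDim, with XYZW in entries 0-3), compute the
   vertex a + t (b - a) where the edge crosses the near plane. */
void clipEdge(int varyDim, const double a[], const double b[],
        double result[]) {
    double t = (a[2] + a[3]) / (a[2] + a[3] - b[2] - b[3]);
    for (int i = 0; i < varyDim; i += 1)
        result[i] = a[i] + t * (b[i] - a[i]);
}
```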

360mainClipping.c: In a copy of 350mainClipping.c, replace 350mesh.c with 360mesh.c. That's it. Then stress-test the program. There should be no artifacts or crashes when objects cross the camera plane. Instead, objects should appear smoothly clipped where they cross the near plane. Both perspective and orthographic projections should work.

Clean up and hand in 340mainLandscape.c, 350mesh.c, 350mainClipping.c, 360mesh.c, and 360mainClipping.c.

Optional: Packaging the Graphics Engine

Congratulations! Our first 3D graphics engine is complete. It might not seem complete, because we have made only rudimentary images with it. However, a wide variety of features, effects, and applications can be implemented using what we have now.

Let's take this opportunity to package our graphics engine for posterity. This packaging has several benefits. First, we can release a binary version without giving away the source code. Second, we can hide the implementation of the engine from its users, so that they use only the interfaces. Third, those users do not need to re-compile the graphics engine every time they compile their programs. (In a small project like this, the compilation time is small. In larger projects, the time can be huge.) Fourth, the users need to include just one header file instead of 10 or more C files.

370engine.h: Skim. This header provides public interfaces without committing to implementation details. For example, all functions in 270triangle.c are omitted. Anyway, if you have some other functions that you want to make public, then feel free to add them.

370engine.c: Compile following the directions at the top of the file. The resulting 370engine.o is the binary version of the graphics engine.

370mainEngine.c: Start with a copy of 360mainClipping.c. Edit it to #include 370engine.h (and 040pixel.h) but none of our other engine files. At the top, change the compilation instructions, so that the user knows how to link to 370engine.o (and 040pixel.o). Test that the program works correctly.

If you ever wanted to play around with this graphics engine — try a weird effect, make better art, etc. — then now would be the time. See the two examples below. On the left is the simple RGB cube that I showed you on the first day of class. On the right is a heavily modified 370mainEngine.c. I make it by first switching the artwork to a temperate grassy climate. I add a textured sky and untextured shrubbery. To the shaders I add a simple lighting effect (which we study in a couple of weeks). I add a lake, a mirror effect on the lake, and a blur effect on the mirror. All of these features are added in main.c; no edits to the engine are required.

371mainRGB screenshot 372mainMirror screenshot

Optional: Study Questions

In the RGB cube application above, I use the standard box mesh from 240mesh3D.c, with vertex attributes XYZSTNOP. I use no textures. So how do I color the cube? (There are several ways to do it. You're not supposed to read my mind, but you are supposed to come up with something doable and not overly complicated.)

In the mirror lake application above, how do I make the sky? Be careful. And here's a much harder question: How do I achieve the mirror effect? (It's not faked. It's recomputed in every frame to automatically handle any changes to the scene.)

On each animation frame, the algorithm does the following 11 steps, which are listed in alphabetical order. Put them in chronological order (the order in which they happen in the algorithm), and delineate which steps happen in the vertex and fragment shaders.