2021 February 20

CS 311: Exam B Preparation

Introduction

Exam B is written/typed, not oral. It covers all course material up to and including textured light. It does not cover shadow mapping. It is focused on the material that was not covered on Exam A. (Recall that Exam A covered everything up to and including depth buffering.)

The exam is intended to take 60 or 70 minutes, but you are given two hours, to allow for technical glitches and to reduce stress. You can take the exam during any two-hour window in a 24-hour period that runs (tentatively) from Wednesday 8:30 AM to Thursday 8:30 AM.

The exam is open-book, meaning that you can use all of this course's resources: the course Moodle site, the course web site, your notes, your homework, etc. You may not share any resources with anyone else during the exam period. You may not consult other web sites, books, etc. You may not confer with other people.

The open-bookness alters the way that I write the exam. I ask very few questions whose answers can simply be looked up. Instead, I ask more conceptual questions and open-ended, problem-solving questions.

From giving similar exams in other courses, I get the sense that some students rely too much on the open-bookness. They don't seriously prepare for the exam, because they figure that they can look up any information they need. However, looking up information is slow. You need to have most of the course content in your head, so that you can marshal it quickly in solving problems.

To that end, I have prepared some questions for you to study. I have grouped them by difficulty. I apologize if your view of the difficulty differs from mine.

Easier

These questions test your understanding of important concepts. Some of them you could simply look up. In a closed-book exam, a substantial fraction of questions come from this level. In an open-book exam, fewer questions come from this level.

A. Here are some steps of the triangle rasterization algorithm in alphabetical order. Place them in chronological order (the order in which they are executed in the algorithm). Label which ones occur in the vertex shader, which ones occur in the fragment shader, and which ones OpenGL itself does for us.

B. When we apply the modeling isometry to a vector, we apply it differently, depending on whether that vector represents a position or a direction. Why? Your answer should include algebraic verification of whatever claims you make.
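
To see the mechanism concretely, here is a minimal sketch in C, assuming 4x4 homogeneous matrices stored as two-dimensional arrays (the conventions are illustrative, not necessarily our engine's). A position homogenizes with w = 1, so it feels the translation; a direction homogenizes with w = 0, so it feels only the rotation.

    #include <stdio.h>

    /* Applies a 4x4 homogeneous matrix m to (v[0], v[1], v[2], w), writing
    the first three components of the result into out. */
    void apply(double m[4][4], const double v[3], double w, double out[3]) {
        for (int i = 0; i < 3; i += 1)
            out[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2] + m[i][3] * w;
    }

    int main(void) {
        /* An isometry: identity rotation combined with translation by (5, 0, 0). */
        double m[4][4] = {
            {1.0, 0.0, 0.0, 5.0},
            {0.0, 1.0, 0.0, 0.0},
            {0.0, 0.0, 1.0, 0.0},
            {0.0, 0.0, 0.0, 1.0}};
        double v[3] = {1.0, 2.0, 3.0}, pos[3], dir[3];
        apply(m, v, 1.0, pos);  /* as a position: the translation applies */
        apply(m, v, 0.0, dir);  /* as a direction: the translation is ignored */
        printf("position (%f, %f, %f), direction (%f, %f, %f)\n",
            pos[0], pos[1], pos[2], dir[0], dir[1], dir[2]);
        return 0;
    }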

C. When we place a camera in our scene, where is that camera? How is the camera's location stored? For example, how would you write code to print out the location of the camera?
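
As a hint about the kind of answer I'm looking for, here is a sketch, under the assumption that the camera's placement is stored as an isometry (the struct and field names here are hypothetical, not necessarily our engine's): the translation part of that isometry is the camera's world position.

    #include <stdio.h>

    /* Hypothetical camera type: an isometry stored as a rotation and a
    translation. */
    typedef struct {
        double rotation[3][3];
        double translation[3];
    } Camera;

    /* Prints the camera's location, which under this storage scheme is just
    the translation part of its isometry. */
    void cameraPrintLocation(const Camera *cam) {
        printf("camera at (%f, %f, %f)\n",
            cam->translation[0], cam->translation[1], cam->translation[2]);
    }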

D. When we configure our viewport transformation, we configure it based on what information? Why?
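
For comparison while you study, here is a hedged sketch of one common convention: the viewport rescales the box [-1, 1] x [-1, 1] onto a window of width w and height h, so those two numbers are all it needs. (The exact box and pixel conventions here are assumptions; check them against our course's formulation.)

    /* Builds a viewport matrix mapping x from [-1, 1] to [0, w] and y from
    [-1, 1] to [0, h], leaving z and the homogeneous coordinate untouched. */
    void viewportMatrix(double w, double h, double m[4][4]) {
        double viewport[4][4] = {
            {w / 2.0, 0.0, 0.0, w / 2.0},
            {0.0, h / 2.0, 0.0, h / 2.0},
            {0.0, 0.0, 1.0, 0.0},
            {0.0, 0.0, 0.0, 1.0}};
        for (int i = 0; i < 4; i += 1)
            for (int j = 0; j < 4; j += 1)
                m[i][j] = viewport[i][j];
    }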

E. Why do we clip at the near plane? Why might we clip at the far plane? What tradeoffs are involved?

F. In OpenGL, what is a location? How many kinds of location are there?

G. What are the major changes in features between one version of OpenGL and the next, as explained by our OpenGL tutorials?

H. Suppose that a certain vertex in a certain scene graph node is at position (x, y, z) relative to its parent. Explain in detail how you could compute the world position of that vertex.
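
Here is a minimal sketch of one answer in C, assuming that each node stores a 4x4 modeling matrix giving its placement relative to its parent (the Node type and helper are hypothetical, not our engine's): homogenize the position with w = 1, then apply each node's matrix on the way up to the root.

    typedef struct Node Node;
    struct Node {
        double modeling[4][4];  /* placement relative to the parent node */
        Node *parent;           /* NULL at the root */
    };

    /* Applies a 4x4 matrix to a homogeneous 4-vector. */
    void transform(double m[4][4], const double v[4], double out[4]) {
        for (int i = 0; i < 4; i += 1)
            out[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2] + m[i][3] * v[3];
    }

    /* Computes the world position of (local[0], local[1], local[2]), given in
    node's local coordinates, by applying each modeling matrix from the node
    up to the root. */
    void worldPosition(Node *node, const double local[3], double world[3]) {
        double v[4] = {local[0], local[1], local[2], 1.0}, w[4];
        while (node != NULL) {
            transform(node->modeling, v, w);
            for (int i = 0; i < 4; i += 1)
                v[i] = w[i];
            node = node->parent;
        }
        for (int i = 0; i < 3; i += 1)
            world[i] = v[i];
    }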

Mediumer

To me, these questions are too hard to be called easy, but not hard enough to fit into the third category. An open-book exam draws from this category more than a closed-book exam does.

A. Usually, my homework asks you to add new features one at a time, in baby steps. But I asked you to add the projection and viewport to our software graphics engine in a single step. Why?

B. Imagine a simple 2x2 checkerboard pattern. Imagine a cube, texture mapped so that this checkerboard appears on every side. Draw the cube in three ways: orthographic projection, perspective projection without correct interpolation, and perspective projection with correct interpolation. Especially in the middle case, you might want to annotate your drawing, making it more of a diagram, so that your reader is sure that you understand the issues.
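
For the middle and right-hand drawings, recall the arithmetic that separates them. Here is a hedged sketch for a single attribute along a screen-space segment (a simplification of the triangle case): interpolating s directly in screen space produces the middle drawing's distortion, while interpolating s / w and 1 / w and then dividing produces the correct right-hand drawing.

    /* Perspective-correct interpolation of one attribute s along a
    screen-space segment. t is the screen-space fraction from vertex 0 to
    vertex 1; w0 and w1 are the homogeneous w values from the projection. */
    double perspCorrect(double t, double s0, double w0, double s1, double w1) {
        double sOverW = (1.0 - t) * (s0 / w0) + t * (s1 / w1);
        double oneOverW = (1.0 - t) * (1.0 / w0) + t * (1.0 / w1);
        return sOverW / oneOverW;
    }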

C. In the instructions for 170mainInterpolating.c, there is another study question about perspective-corrected interpolation.

D. Draw the following graph to explain specular reflection. Let the horizontal axis of the graph be the angle between two crucial vectors. Let the vertical axis be the amount of specular reflection. Draw a curve assuming shininess 1, another curve for shininess 2, and another for shininess 4.
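
If you want to generate points for your curves, here is a tiny program, assuming the usual cosine-power rule specular = max(0, cos angle)^shininess (check this against our course's exact formula). Notice how higher shininess makes the curve fall off faster, giving a tighter highlight.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double pi = 3.14159265358979;
        double shininesses[3] = {1.0, 2.0, 4.0};
        /* For each angle from 0 to pi / 2, print max(0, cos angle)^shininess
        for shininess 1, 2, and 4. */
        for (double angle = 0.0; angle <= pi / 2.0 + 1e-9; angle += pi / 16.0) {
            printf("%5.3f", angle);
            for (int i = 0; i < 3; i += 1) {
                double c = cos(angle);
                printf(" %5.3f", pow(c > 0.0 ? c : 0.0, shininesses[i]));
            }
            printf("\n");
        }
        return 0;
    }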

E. Some of our lighting calculations depend on the location of the light and the camera. How, exactly? We have focused on the case of a directional light (far away from the scene, so that light rays arrive at the scene parallel) and a perspective camera (embedded in the scene, so that the sides of the viewing frustum are not parallel). How would our calculations change for a light embedded in the scene? For an orthographic camera? [Hint: There is a reason why I'm asking about the light and the camera at the same time. Use your understanding of one to understand the other.]
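
As a starting point, here is a hedged sketch of the one computation that changes (the names are hypothetical, not our engine's). For a directional light, the direction toward the light is a single constant vector for the whole scene; for a light embedded in the scene, it must be recomputed at every fragment. The camera works analogously: an orthographic camera has one constant view direction, while a perspective camera's direction toward the eye varies per fragment.

    #include <math.h>

    /* Normalizes v in place. */
    void normalize(double v[3]) {
        double len = sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        for (int i = 0; i < 3; i += 1)
            v[i] /= len;
    }

    /* For a light embedded in the scene: the direction from the fragment
    toward the light depends on the fragment's world position. (For a
    directional light, dLight would instead be one constant vector.) */
    void lightDirection(const double lightPos[3], const double fragPos[3],
            double dLight[3]) {
        for (int i = 0; i < 3; i += 1)
            dLight[i] = lightPos[i] - fragPos[i];
        normalize(dLight);
    }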

F. In our textured light algorithm, we assumed for simplicity that texture coordinates (0, 0), (1, 0), and (0, 1) were assigned to the vertices a, b, c. How must the algorithm change to allow for arbitrary texture coordinates at those vertices? Also, how must the algorithm change to handle a texture-mapped quadrilateral that is not a rectangle?
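
For the first part, here is a sketch of the generalization, assuming the point in question sits at a + p1 (b - a) + p2 (c - a) for weights p1, p2 (the function is illustrative, not our engine's code). With texture coordinates (0, 0), (1, 0), (0, 1) at a, b, c it collapses back to (s, t) = (p1, p2).

    /* Interpolates arbitrary texture coordinates aTex, bTex, cTex, assigned
    to vertices a, b, c, at the point a + p1 (b - a) + p2 (c - a). */
    void texCoords(double p1, double p2, const double aTex[2],
            const double bTex[2], const double cTex[2], double st[2]) {
        for (int i = 0; i < 2; i += 1)
            st[i] = aTex[i] + p1 * (bTex[i] - aTex[i]) + p2 * (cTex[i] - aTex[i]);
    }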

Harder

These questions are more open-ended or speculative. Some of them might not work well as questions on an actual timed exam.

A. Wouldn't it be nice to collapse meshGLInitialize and meshGLFinishInitialization into a single, more convenient function? How would you design it? What information would we have to pass to that function? How would you pass that information?

B. Examine 000pixel.c. Which version of OpenGL does it use? Why did I choose that version? Can you figure out how 000pixel.c works? For example, when you call pixSetRGB, how does your RGB color actually get to the screen? What is pixNeedsRedisplay, and why?

C. For modeling transformations we have been using isometries. Each isometry is a combination of a rotation and a translation. What if we also introduced scaling by a factor of s? For example, we could render the mesh five times larger (s = 5) or 14% smaller (s = 0.86). Take my word that the linear algebra works out easily. (Scaling by s can be implemented as a matrix S with s along its diagonal and zeros elsewhere. This S commutes with rotation matrices R, and the product S R = R S is easy to program. Placing it into the upper-left 3x3 submatrix of a homogeneous 4x4 matrix lets us combine it with translations in the usual way.) How would introducing this scaling feature make our graphics engine more flexible? What problems would it cause? Could those problems be fixed?
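
To make the parenthetical concrete, here is a sketch of how the 4x4 matrix might be assembled (the row-major storage and the function name are assumptions, not our engine's conventions):

    /* Builds the homogeneous matrix for "rotate by rot, scale by s, then
    translate by t": the upper-left 3x3 block is s times rot, and the last
    column holds the translation. */
    void scaledIsometry(double s, double rot[3][3], const double t[3],
            double m[4][4]) {
        for (int i = 0; i < 3; i += 1) {
            for (int j = 0; j < 3; j += 1)
                m[i][j] = s * rot[i][j];
            m[i][3] = t[i];
        }
        m[3][0] = m[3][1] = m[3][2] = 0.0;
        m[3][3] = 1.0;
    }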

D. (This question is probably too hard for students who haven't studied linear algebra. On the other hand, Figure 2 of 000matrices2By2.pdf makes the problem not totally unreasonable.) Continuing the previous question, should we allow even greater flexibility: arbitrary 3x3 matrices M, placed into the upper-left 3x3 submatrix of a 4x4 matrix? Would this cause problems?

E. In my mirror demo (a screenshot of which appears at the bottom of our software rasterization page), there are undesirable artifacts where the water meets the land. Why? How could I fix them?