2022 November 13

CS 311: Exam A Solutions

A.1. Conceptually, the camera consists of a projection and an isometry. The isometry consists of a rotation and a translation. The translation is the camera's position. This makes sense, because the camera is naturally at the origin until we use an isometry to place it, and that isometry translates the camera from the origin to wherever its final position is. In practice, we usually configure the isometry using camLookAt or camLookFrom. In the latter's code, one can explicitly see the isometry's translation set to the camera's specified position.
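A concrete way to see this: pack a 3×3 rotation and a translation into a 4×4 homogeneous isometry, and applying that isometry to the origin yields exactly the translation, i.e. the camera's position. The sketch below is illustrative; `isometryFromRotTransl` is a hypothetical helper name, and the `mat441Multiply` signature is assumed from context.

```c
#include <assert.h>
#include <math.h>

/* Hypothetical helper: pack a 3x3 rotation rot and a translation transl
   into the 4x4 homogeneous isometry iso = [R t; 0 1]. */
void isometryFromRotTransl(const double rot[3][3], const double transl[3],
                           double iso[4][4]) {
    for (int i = 0; i < 3; i += 1) {
        for (int j = 0; j < 3; j += 1)
            iso[i][j] = rot[i][j];
        iso[i][3] = transl[i];
        iso[3][i] = 0.0;
    }
    iso[3][3] = 1.0;
}

/* Multiply a 4x4 matrix by a 4x1 vector (signature assumed from context). */
void mat441Multiply(const double m[4][4], const double v[4],
                    double mTimesV[4]) {
    for (int i = 0; i < 4; i += 1) {
        mTimesV[i] = 0.0;
        for (int j = 0; j < 4; j += 1)
            mTimesV[i] += m[i][j] * v[j];
    }
}
```

Applying `iso` to the homogeneous origin (0, 0, 0, 1) returns (transl, 1), which is the sense in which the isometry translates the camera from the origin to its final position.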

A.2. In general, the vertex shader transforms attributes (which come from the mesh) into varyings (which are passed on to the rasterizer-interpolator). As input it also takes the uniforms and some meta-data. A practical example of a vertex shader is one that uses the modeling and camera matrices to transform the vertices from local coordinates to clipping coordinates.
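A minimal sketch of such a vertex shader, under assumed conventions: the first 16 uniforms store the combined 4×4 matrix (row-major) taking local coordinates to clipping coordinates, and the first three attributes are the local position. The signature is illustrative, not necessarily the course library's.

```c
#include <assert.h>
#include <math.h>

/* Illustrative vertex shader: unif[0..15] hold a combined 4x4 matrix
   (row-major) that takes local coordinates to clipping coordinates, and
   attr[0..2] is the local position. All conventions here are assumptions
   made for illustration. */
void shadeVertex(int unifDim, const double unif[], int attrDim,
                 const double attr[], int varyDim, double vary[]) {
    double local[4] = {attr[0], attr[1], attr[2], 1.0};
    /* vary[0..3] become the clipping coordinates: vary = M * local. */
    for (int i = 0; i < 4; i += 1) {
        vary[i] = 0.0;
        for (int j = 0; j < 4; j += 1)
            vary[i] += unif[4 * i + j] * local[j];
    }
    /* Any further varyings (e.g. texture coordinates) would simply be
       copied from the remaining attributes. */
}
```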

B.1. In my implementation of clipping, I do not have a helper function to detect whether a vertex is clipped. I have a helper function to compute new interpolated vertices. I have a helper function that emits one or two triangles, depending on how many vertices are clipped. And I have a helper function that, for a single triangle, performs the viewport transformation, the homogeneous division, and the call to triRender.
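The interpolation helper can be sketched generically. Suppose each vertex has a signed clip value d, with d >= 0 meaning unclipped (the exact formula for d depends on which plane is being clipped against). The edge from an unclipped vertex a to a clipped vertex b crosses the clipping plane at parameter t = dA / (dA - dB), and every varying is interpolated at that same t. The function name and signature below are hypothetical.

```c
#include <assert.h>
#include <math.h>

/* Hypothetical interpolation helper: a holds the varyings of an unclipped
   vertex (clip value dA >= 0) and b those of a clipped vertex (dB < 0).
   The edge crosses the clipping plane at t = dA / (dA - dB); interpolate
   every varying at that t to get the new vertex. */
void clipInterpolate(int varyDim, const double a[], double dA,
                     const double b[], double dB, double newVary[]) {
    double t = dA / (dA - dB);
    for (int k = 0; k < varyDim; k += 1)
        newVary[k] = a[k] + t * (b[k] - a[k]);
}
```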

B.2. We need to perform the viewport transformation and the homogeneous division. They can be done in either order, but the viewport transformation's output cannot alias its input. So I mat441Multiply out to a temporary variable and then vecScale back into the varyings.
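A sketch of that step, with mat441Multiply and vecScale reconstructed here under assumed signatures (4×4 matrix times 4-vector, and scalar times vector, each writing into an output argument):

```c
#include <assert.h>
#include <math.h>

/* Signature assumed from context: m times v into mTimesV. The output must
   not alias the input, since each output entry reads all four inputs. */
void mat441Multiply(const double m[4][4], const double v[4],
                    double mTimesV[4]) {
    for (int i = 0; i < 4; i += 1) {
        mTimesV[i] = 0.0;
        for (int j = 0; j < 4; j += 1)
            mTimesV[i] += m[i][j] * v[j];
    }
}

/* c times v into cTimesV; element-wise, so the output may alias the input. */
void vecScale(int dim, double c, const double v[], double cTimesV[]) {
    for (int i = 0; i < dim; i += 1)
        cTimesV[i] = c * v[i];
}

/* Viewport transformation, then homogeneous division: multiply out to a
   temporary (to avoid aliasing), then scale by 1 / w back into vary. */
void viewportAndDivide(const double viewport[4][4], double vary[4]) {
    double temp[4];
    mat441Multiply(viewport, vary, temp);
    vecScale(4, 1.0 / temp[3], temp, vary);
}
```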

B.3. The shader program contains function pointers to the vertex shader and the fragment shader. It also contains some meta-data: the dimensions of the uniforms, attributes, and varyings, and the number of textures. The idea is that this data structure should contain a complete description of the shader program.
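A sketch of such a data structure in C. The type name, member names, and function-pointer signatures are illustrative, not necessarily those of the course library.

```c
#include <assert.h>

/* Illustrative shader-program descriptor: function pointers for the two
   shaders, plus meta-data describing their interfaces. */
typedef struct shaShading shaShading;
struct shaShading {
    int unifDim;    /* number of uniforms */
    int attrDim;    /* number of attributes per vertex */
    int varyDim;    /* number of varyings per vertex */
    int texNum;     /* number of textures */
    void (*shadeVertex)(int unifDim, const double unif[], int attrDim,
                        const double attr[], int varyDim, double vary[]);
    void (*shadeFragment)(int unifDim, const double unif[], int texNum,
                          const void *tex[], int varyDim,
                          const double vary[], double rgbd[4]);
};
```

The rasterizer then needs no compile-time knowledge of any particular shader: the dimensions tell it how much data to interpolate, and the function pointers tell it what to run per vertex and per fragment.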