2022 November 13

CS 311: Exam B Solutions

2. A mesh lives in CPU memory, where we can easily configure and modify it. A vesh lives in GPU memory, where the GPU can access it quickly during rendering. When we initialize a vesh from a mesh, the data are immediately copied over to buffers on the GPU. After that copy, the mesh itself can be finalized (deallocated), if we don't need it for some other purpose.
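For concreteness, here is a minimal sketch of that workflow in C. The names meshMesh, veshVesh, meshInitializeBox, veshInitializeMesh, meshFinalize, and veshFinalize are assumptions made for illustration, not quotations from the engine's actual API.

    /* Build and edit the mesh in CPU memory. (All names here are
    hypothetical stand-ins for the engine's actual API.) */
    meshMesh mesh;
    veshVesh vesh;
    if (meshInitializeBox(&mesh, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0) != 0)
        return 1;
    /* Initializing the vesh copies the mesh data into GPU buffers. */
    if (veshInitializeMesh(&vesh, &mesh) != 0) {
        meshFinalize(&mesh);
        return 2;
    }
    /* The GPU now holds its own copy, so the CPU-side mesh can go. */
    meshFinalize(&mesh);
    /* ...render using the vesh; eventually call veshFinalize(&vesh). */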

6. Our bodies are designed so that we can pair any kind of geometry (sphere, plane, etc.) with any kind of material (Phong-lit, mirror, etc.). This is why getMaterial must be separate from the other two shaders. The other two shaders use the same information as each other, but they are kept separate for performance reasons. getIntersection is called on every body, while getTexCoordsAndNormal is called only on the winning body. If we combined them, then we would compute texture coordinates and normals even for the losing bodies, which would be unnecessarily expensive.

As for why we don't split them up further: They're pretty small, so there's not much reason. If we split them, then we'd have more function pointers to configure, and some of the functions would redundantly re-compute data that others had already computed.
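To make this division of labor concrete, here is a minimal sketch of a body type and the ray-casting loop. The names (bodyBody, rayIntersection, rayCast, and the parameter lists) are hypothetical, chosen for illustration; the point is only that getIntersection runs on every body, while the other two shaders run on the winner alone.

    /* A hypothetical body: three function pointers let any geometry
    pair with any material. */
    typedef struct {
        double tHit;    /* ray parameter at the intersection, if any */
    } rayIntersection;
    typedef struct bodyBody bodyBody;
    struct bodyBody {
        int (*getIntersection)(const bodyBody *body, const double e[3],
            const double d[3], rayIntersection *inter);
        void (*getTexCoordsAndNormal)(const bodyBody *body,
            const double e[3], const double d[3],
            const rayIntersection *inter, double texCoords[2],
            double normal[3]);
        void (*getMaterial)(const bodyBody *body,
            const double texCoords[2], double rgb[3]);
        const void *data;   /* geometry- or material-specific state */
    };

    /* Casts the ray e + t d. Returns 1 and fills rgb if a body wins. */
    int rayCast(int bodyNum, const bodyBody bodies[], const double e[3],
            const double d[3], double rgb[3]) {
        int winner = -1;
        rayIntersection best, inter;
        /* getIntersection is called on every body... */
        for (int k = 0; k < bodyNum; k += 1)
            if (bodies[k].getIntersection(&bodies[k], e, d, &inter) != 0)
                if (winner == -1 || inter.tHit < best.tHit) {
                    winner = k;
                    best = inter;
                }
        if (winner == -1)
            return 0;
        /* ...but the other two shaders only on the winning body. */
        double texCoords[2], normal[3];
        bodies[winner].getTexCoordsAndNormal(&bodies[winner], e, d,
            &best, texCoords, normal);
        bodies[winner].getMaterial(&bodies[winner], texCoords, rgb);
        return 1;
    }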

Upon further reflection, there is an argument for splitting the material into three parts — Phong lighting, mirror, transmission — so that we have even more freedom to mix and match body configurations. But I suspect that many of the combinations wouldn't make physical sense.

3. Except for minor differences in formatting, the uniforms, attributes, and varyings in the given shaders all would be reproduced pretty faithfully in the software implementation, as follows.

Both the scene-wide uniforms and the body-specific uniforms would go into the software engine's single array of uniforms. There would be two 4x4 matrices, for modeling and for the camera. The seven vec4s would be stored as 3-dimensional vectors. (We make them 4-dimensional in Vulkan just because of its memory alignment requirements.) The texture indices would be absent, because that's not how we make textures body-specific in our software engine. (Moreover, our software engine doesn't allow integer uniforms.) So unifDim would be 2 * 16 + 7 * 3 = 53.
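As a sketch, the resulting uniform layout might be described by offset constants like these. The constant names are hypothetical, chosen for illustration.

    /* Hypothetical offsets into the 53-dimensional uniform array. */
    #define UNIFMODELING 0                  /* 16: 4x4 modeling matrix */
    #define UNIFCAMERA (UNIFMODELING + 16)  /* 16: 4x4 camera matrix */
    #define UNIFVECS (UNIFCAMERA + 16)      /* 7 * 3: seven 3D vectors */
    #define UNIFDIM (UNIFVECS + 7 * 3)      /* 2 * 16 + 7 * 3 = 53 */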

The attributes are determined by our meshes. Ever since 250main3D.c, we've been using attributes XYZ, ST, NOP, for an attrDim of 3 + 2 + 3 = 8.
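In the same style, the attribute layout would be as follows (again with hypothetical constant names).

    /* Attributes XYZ, ST, NOP, as in 250main3D.c. */
    #define ATTRXYZ 0   /* 3: position */
    #define ATTRST 3    /* 2: texture coordinates */
    #define ATTRNOP 5   /* 3: normal */
    #define ATTRDIM 8   /* 3 + 2 + 3 = 8 */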

The varyings shown in the shaders have dimensions 2, 3, 3, 3. However, an implementation in our software engine would use five more varying dimensions, as follows. First, gl_Position is itself a varying (so important that we don't even get to declare it, although we must write to it); it corresponds to the XYZW varyings in our software engine. Second, we need one additional scalar varying to perform perspective-corrected interpolation. So the total varyDim would be 4 + 1 + 2 + 3 + 3 + 3 = 16.
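Finally, the varying layout might look like this. The offsets for XYZW, the perspective scalar, and ST follow from the discussion above, but the roles of the three 3-dimensional varyings are guesses (a normal, a fragment position, and a light direction are typical), so those names are hypothetical.

    /* Hypothetical layout of the 16-dimensional varying vector. */
    #define VARYXYZW 0    /* 4: gl_Position */
    #define VARYPERSP 4   /* 1: scalar for perspective correction */
    #define VARYST 5      /* 2: texture coordinates */
    #define VARYA 7       /* 3: first vec3 varying, e.g. normal */
    #define VARYB 10      /* 3: second vec3 varying */
    #define VARYC 13      /* 3: third vec3 varying */
    #define VARYDIM 16    /* 4 + 1 + 2 + 3 + 3 + 3 = 16 */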