2016 October 13

CS 111: Assignment: Image Processing

Introduction

This assignment describes further examples of image processing that are used in the entertainment industry. If you haven't done the assigned reading from Chapter 6 of our textbook, consider doing so; it might make this assignment easier. You may complete the assignment alone or with a partner.

Download a fresh copy of imageProcessing.py. You will edit and submit this file electronically, as usual, by Friday at the start of class.

(By the way, if you want to do this assignment on your own computer, you will need to install the latest version of the Python Imaging Library (PIL), which is confusingly called Pillow. On macOS, you can do this by executing the command pip3 install Pillow in a terminal. On Windows there is surely some similar process, but I don't know it; search the web or ask the staff in CMC 305.)

Green-Screening

Green-screening is a technique used in movies and television to superimpose video of actors on video of a background scene, in such a way that the actors appear to be in the scene. We'll do just one frame of video; that is, we'll superimpose a single actor image on a single background image. Here's an example with a particularly simple image:

Download this actor image and background image to your computer; you'll use them for testing your green-screener. (You can of course try your green-screener on any other images that you find. Just make sure that the actor and background images have the same width and height.) Then, in imageProcessing.py, write the following two functions.

def greenScreenRGB(rgbActor, rgbBackground):
	'''Given two RGB colors, returns one or the other, depending on whether the first one is "green".'''

def greenScreenedImage(actorImageWindow, backgroundImageWindow):
	'''Given two image windows (with images of the same size), returns a new image window, formed by green-screening the actor against the background.'''

The second function is not difficult. You can write it in two lines, using imageThreaded and the first function. The first function is more difficult, because the actor's background is not perfectly uniformly green. That's why, in the example shown above, some green background leaks into my final image. Your greenScreenRGB function needs to detect a swath of green-like colors. Try to do better than the example shown above.
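As a starting point, here is one possible heuristic for the first function, sketched as a standalone Python function. The RGB representation (tuples of 0–255 integers) and the particular margins are assumptions; tune them against the test images.

```python
def greenScreenRGB(rgbActor, rgbBackground):
    '''Given two RGB colors, returns the background color if the actor color
    looks "green", and the actor color otherwise. Colors are assumed to be
    (r, g, b) tuples of integers in 0..255; the thresholds below are guesses.'''
    r, g, b = rgbActor
    # Treat a pixel as green screen if green is bright and clearly
    # dominates both the red and the blue channels.
    if g > 100 and g > 1.4 * r and g > 1.4 * b:
        return rgbBackground
    return rgbActor
```

A plain g > r and g > b test tends to let shadowed or bluish-green background pixels through; requiring green to exceed the other channels by a margin like 1.4 catches more of the swath of green-like colors, at the risk of also replacing genuinely green parts of the actor.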

Edge Detection

In this part of the assignment, you'll write an edge detector, which finds all of the edges in an image. Basically, an edge is a sudden change of color. Here's an example:

Our textbook describes an edge detection algorithm, but I would like to do it differently, for a few reasons. First, I want you to understand the algorithm well enough that you can complete this version and then compare it to the book's version. Second, the book's version takes in an RGB image and immediately converts it to grayscale; that is undesirable, because it discards two thirds of the color content of the image. Third, I wish to emphasize that you can build an edge detector from our established basic tools imageMapped, imageThreaded, and imageConvolved, instead of writing completely new code.

Here is the basic idea of edge detection, as discussed by our book. Convolving an image with the kernel [[-1, -2, -1], [0, 0, 0], [1, 2, 1]] detects horizontal edges. In more detail, the kernel [[-1, -2, -1], [0, 0, 0], [1, 2, 1]] produces large positive red values wherever the red content of the image increases sharply from top to bottom, and large negative red values wherever the red content decreases sharply from top to bottom. Unfortunately, our images can't store negative red values, so we have to handle them in a different way. We use the kernel [[1, 2, 1], [0, 0, 0], [-1, -2, -1]] to produce large positive red values wherever the red content of the image decreases sharply from top to bottom. And the same goes for the green and blue channels. Wherever the color content increases or decreases sharply, we should detect an edge. This leads to the following algorithm for detecting horizontal edges.

  1. Convolve the image by the first kernel, to detect sharp top-to-bottom increases in each of the RGB channels.
  2. Convolve the image by the second kernel, to detect sharp top-to-bottom decreases in each of the RGB channels.
  3. Combine these two partial edge images using imageThreaded. In each of the RGB channels, the threaded function should return the maximum of the values from the two images.
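Steps 1–3 can be sketched numerically on a single color channel, using plain lists of numbers. (The course's imageConvolved and imageThreaded apply this arithmetic over entire RGB images; this sketch just shows what happens at one pixel, and it assumes kernel row i multiplies patch row i, matching the description above.)

```python
# Detects top-to-bottom increases in a channel.
KERNEL_DOWN = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
# Detects top-to-bottom decreases in a channel.
KERNEL_UP = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]

def convolveAt(patch, kernel):
    '''Applies a 3x3 kernel to a 3x3 patch of channel values, clamping
    negative results to 0, the way an image that cannot store negative
    values would.'''
    total = sum(kernel[i][j] * patch[i][j] for i in range(3) for j in range(3))
    return max(0, total)

# A patch whose values jump sharply from top to bottom: a horizontal edge.
patch = [[0, 0, 0],
         [0, 0, 0],
         [200, 200, 200]]

increase = convolveAt(patch, KERNEL_DOWN)  # 200*1 + 200*2 + 200*1 = 800
decrease = convolveAt(patch, KERNEL_UP)    # -800, clamped to 0
edginess = max(increase, decrease)         # step 3: channelwise maximum
```

The channelwise maximum in step 3 ensures that an edge registers strongly whether the color jumps up or down across it; exactly one of the two convolutions survives the clamping at any sharp edge.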

The algorithm for detecting vertical edges is identical, except that it uses the kernels [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]] and [[1, 0, -1], [2, 0, -2], [1, 0, -1]] to detect sharp left-to-right increases and decreases in color content, respectively.

Once you've computed a horizontal edge image and a vertical edge image, you can combine them (again using imageThreaded) into a single edge image that captures horizontal, vertical, and diagonal edges. What exactly is the threaded function that acts on each pair of pixels? In each of the three RGB channels, you have a measure of horizontal "edginess" and a measure of vertical edginess. For the sake of argument, call these six edginess numbers hr, hg, hb (horizontal) and vr, vg, vb (vertical). Then an edge runs through the pixel if

hr² + vr² + hg² + vg² + hb² + vb² ≥ t²,

where t is some fixed threshold number. Your book recommends a threshold value of 175. This threshold works well for me, but I invite you to try other threshold values, to see what effect they have.
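The threaded function that combines the two edge images might look like the following sketch. The choice to mark edges as white and everything else as black, along with the names, is an assumption; adapt it to whatever your imageThreaded expects.

```python
def edgePixelRGB(rgbHorizontal, rgbVertical, t=175):
    '''Given the horizontal and vertical edginess of one pixel, each as an
    (r, g, b) triple, returns white if the combined squared edginess meets
    the threshold t**2, and black otherwise. The white-for-edges convention
    is an assumption.'''
    hr, hg, hb = rgbHorizontal
    vr, vg, vb = rgbVertical
    if hr**2 + vr**2 + hg**2 + vg**2 + hb**2 + vb**2 >= t**2:
        return (255, 255, 255)
    return (0, 0, 0)
```

Note that the comparison is against t², not t, so that no square roots are needed; the sum of squares plays the role of a squared length of the six-dimensional edginess vector.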

In imageProcessing.py, write the following function, using the algorithm outlined above. My version consists of four calls to imageConvolved and three calls to imageThreaded, and nothing else. Of course, you will have to write helper functions to pass to these functions.

def edgeImage(imageWindow):
	'''Given an image window, returns a new image window capturing the edges in the original image.'''

Be sure to test your edgeImage function. Testing it can take a while if the image you're using is large. You may want to run some of your tests on this smaller image.

Inking

In this last part of the assignment, you will write an inker, which is a function that colors all of the edges in an image black, so that the image looks like it was drawn in ink. This technique is used in many contemporary cartoons, including Futurama and The Simpsons. A complicated background scene, such as a city with many buildings, is difficult to draw by hand, especially if the camera is supposed to move among the buildings in 3D. So these shows render many of their backgrounds using 3D graphics software, post-process these computer-generated images to make them look as if they're drawn in ink and colored by hand, and then superimpose hand-drawn actors. You can see this somewhat in the following still image; the figures are hand-drawn, but the background is computer-generated and post-processed to look like a cartoon.

Your job is to write an inker for 2D still images. Here is an example:

If you've done the rest of this assignment, then you can probably figure out exactly how the inker works. Implement the following function in imageProcessing.py, making as much use of your already-written functions as possible. My version is only two lines of code, plus a helper function.

def inkedImage(image):
	'''Given an image window, returns a new image window with the edges inked.'''
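One way to structure the helper is as a threaded function that takes a pixel of the original image and the corresponding pixel of its edge image, and inks the original wherever an edge was detected. The names and the assumption that the edge image marks edges as pure white come from the edge-detection convention described above.

```python
def inkPixelRGB(rgbOriginal, rgbEdge):
    '''Given a pixel of the original image and the corresponding pixel of
    its edge image, returns black where an edge was detected and the
    original color elsewhere. Assumes edges are marked as pure white.'''
    if rgbEdge == (255, 255, 255):
        return (0, 0, 0)
    return rgbOriginal
```

With a helper like this, the inker itself reduces to computing the edge image and threading the original image against it, which is why a two-line implementation is plausible.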

Polish and Submit Your Work

Remember that at the top of your program file there should be a comment that lists the names of everyone who contributed to the code.

The grader will grade your code by importing it into another Python program and invoking your versions of greenScreenedImage, edgeImage, etc. Therefore your code must conform exactly to its specification. Each function should be named exactly as I've specified, should take exactly the inputs I've specified, and should produce exactly the output I've specified. At this point in the course, there is room for creativity in designing how the code fulfills its specification, but no room for creativity in choosing the specification itself. (Later you will be able to get more creative.)

After testing your code, the grader will read through your code, to make sure it is sensible. Your code is more understandable when it is sprinkled with short comments explaining the tricky bits.