CSC 249/449 Computer Vision: Eye Finding Assignment


Weighting factor 4.

Finding the eyes for purposes of geometric normalization is a crucial step in many face recognition systems. The goal of this assignment is to find the eyes in frontal, monochrome images of human faces. The data you have to handle are the portrait images of the class that you took in an earlier assignment, plus images from previous classes. These can be taken as representative of the variation that occurs naturally when people are asked to provide a good picture of themselves against a relatively uncluttered background, and hence of the sort of images you might expect in a face identification scenario with cooperative users.

There are a lot of ways this could be done, and you are, of course, welcome to use any method you feel like implementing, except canned eye-finders or previously written generic pattern/object recognizers. The approach suggested by current class material, however, is to use low-level image processing and pattern-matching techniques to solve most of the problem. A number of existing systems are built on essentially such methods, so it is not an impossible task.

Your system should take as input a gray-scale image (say, for now, a TIFF file), display it, and put up two crosshairs showing the best guess of where the centers of the eyes are; a sketch of this output step appears below. You should also output the image coordinates of the eyes. You may assume that there is only one face in an image, and, if you want, that every image has a face, though you are also allowed to produce a "no eyes found" response for images with no eyes (this would be useful in a real system). If you want to play with the color images, for instance to see whether finding flesh tones greatly improves performance, feel free, but you should write a system that works on gray-scale images.
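As a concrete (and entirely optional) illustration, here is a minimal C sketch of the output step, assuming the image is already in memory as an 8-bit gray-scale buffer; the names (draw_crosshair, report_eyes, img, width, height) are hypothetical and not part of any course library:

    #include <stdio.h>

    /* Overlay a crosshair of half-length `arm` centered at (cx, cy);
       the center is assumed to lie inside the image. */
    static void draw_crosshair(unsigned char *img, int width, int height,
                               int cx, int cy, int arm)
    {
        for (int d = -arm; d <= arm; d++) {
            if (cx + d >= 0 && cx + d < width)    /* horizontal arm */
                img[cy * width + (cx + d)] = 255;
            if (cy + d >= 0 && cy + d < height)   /* vertical arm */
                img[(cy + d) * width + cx] = 255;
        }
    }

    /* Mark both eye centers and print their image coordinates. */
    void report_eyes(unsigned char *img, int width, int height,
                     int lx, int ly, int rx, int ry)
    {
        draw_crosshair(img, width, height, lx, ly, 7);
        draw_crosshair(img, width, height, rx, ry, 7);
        printf("left eye:  (%d, %d)\nright eye: (%d, %d)\n", lx, ly, rx, ry);
    }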

Issues you will have to deal with include variation in scale, illumination, and contrast, some variation in orientation, and some background clutter. One basic approach is to produce one or more "template" patterns representing typical eyes, or eyes plus surrounding context (e.g. eyebrows, though the considerable variation in eye-to-brow distance might cause difficulty), and look for these in the image using appropriate normalization and matching. Templates could be gray-scale masks, or they could be more sophisticated (involving, say, local edge orientation). Having found single-eye candidates, you could then filter them by looking for appropriately spaced and scaled pairs (this is a very strong constraint, and probably essential for success with a single-eye approach); a sketch of one possible matching score appears below. You might also consider templates for pairs of eyes.
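To make the matching step concrete, here is a plain-C sketch of one standard choice of matching score, normalized cross-correlation (NCC); all names are hypothetical and nothing here is tied to the course libraries. NCC is invariant to gain and offset changes in patch brightness, which buys some of the illumination tolerance discussed below:

    #include <math.h>

    /* NCC of template t (tw x th) against the image patch whose upper-left
       corner is at (x0, y0); iw is the image width in pixels.  The patch is
       assumed to lie entirely inside the image.  Returns a value in [-1, 1]. */
    double ncc_score(const unsigned char *img, int iw,
                     const unsigned char *t, int tw, int th,
                     int x0, int y0)
    {
        int n = tw * th;
        double si = 0, st = 0, sii = 0, stt = 0, sit = 0;
        for (int y = 0; y < th; y++)
            for (int x = 0; x < tw; x++) {
                double iv = img[(y0 + y) * iw + (x0 + x)];
                double tv = t[y * tw + x];
                si += iv; st += tv;
                sii += iv * iv; stt += tv * tv; sit += iv * tv;
            }
        double num = sit - si * st / n;
        double den = sqrt((sii - si * si / n) * (stt - st * st / n));
        return den > 0 ? num / den : 0;
    }

A pair filter could then be run over the top-scoring single-eye candidates, accepting only pairs whose separation falls in a plausible range (say, 1.5 to 3 template widths) and whose connecting line is nearly horizontal.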

You could produce template patterns from prior knowledge (e.g. eyes are typically dark and elliptical), or you might produce them from appropriately normalized samples clipped from real images (it is OK for you to "clip out" the eyes used for such training manually); see the sketch below. If you take this approach, though, you must test your system on faces that did not contribute to the training set.
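If you go the training route, one simple scheme is to average the manually clipped samples. The sketch below assumes each clip has already been resampled to the common template size and brightness-normalized; the names are hypothetical:

    /* Average nclips eye samples, each tw x th, into a single template. */
    void build_template(const unsigned char **clips, int nclips,
                        int tw, int th, unsigned char *out)
    {
        int n = tw * th;
        for (int p = 0; p < n; p++) {
            double sum = 0;
            for (int c = 0; c < nclips; c++)
                sum += clips[c][p];
            out[p] = (unsigned char)(sum / nclips + 0.5);  /* round to nearest */
        }
    }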

Issues include determining the scale at which the patterns are best represented (possibly not full scale, especially if you want speed), and how to handle variation in face size in the test suite (if one scale is not enough, you can scale the image or the template and run the search at multiple scales; see the sketch below). Then there is the question of what sort of matching to use, and how to normalize patterns or image patches prior to matching (if necessary). If you use gray-scale templates, you will need to deal with variation in illumination as well as in location, scale, and orientation. Techniques for handling illumination variation range from normalization of mean values to rank-order representations. Your system also has to handle natural variation in the appearance of eyes in a face, and minor changes in orientation. Eyeglasses could be a problem.
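One cheap way to get multiple scales is a coarse-to-fine image pyramid: repeatedly halve the image with a 2x2 box filter and rerun the matcher at each level. A sketch of the halving step, under the same hypothetical-naming caveat as above:

    /* Downsample src (w x h) by 2 into dst, which must hold (w/2) x (h/2)
       pixels; each output pixel is the average of a 2x2 input block. */
    void halve_image(const unsigned char *src, int w, int h,
                     unsigned char *dst)
    {
        int hw = w / 2, hh = h / 2;
        for (int y = 0; y < hh; y++)
            for (int x = 0; x < hw; x++) {
                int s = src[(2*y) * w + 2*x]     + src[(2*y) * w + 2*x + 1]
                      + src[(2*y+1) * w + 2*x]   + src[(2*y+1) * w + 2*x + 1];
                dst[y * hw + x] = (unsigned char)(s / 4);
            }
    }

Each halving also cuts the search work by roughly a factor of four, which helps with the speed targets discussed below.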

There are other approaches and variations. You could consider basing your patterns on edge-enhanced images, or on images that have been filtered or enhanced in some other way. You could try to use binary patterns extracted from such filtered images (e.g. edgel maps), which would allow very fast matching methods (though you would have to muck with bit-level tricks and representations to really exploit the potential efficiency; a sketch follows). You could try to train a neural net (except it would be slow, and I haven't talked about such things yet, so I wouldn't recommend it unless you have used them previously and know what you are getting into).
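The bit-level trick alluded to above is to pack 64 binary edgels per machine word, so that comparing a template row against an image row costs one XOR and one population count. A sketch, assuming a GCC/Clang compiler for the __builtin_popcountll builtin; the names are otherwise hypothetical:

    #include <stdint.h>

    /* Hamming distance between two bit-packed binary rows of `nwords`
       64-bit words; returns the number of disagreeing edgels.  To test a
       template at an arbitrary horizontal offset, the image row must
       first be shifted into alignment. */
    int row_distance(const uint64_t *a, const uint64_t *b, int nwords)
    {
        int d = 0;
        for (int i = 0; i < nwords; i++)
            d += __builtin_popcountll(a[i] ^ b[i]);
        return d;
    }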

Speed is always an issue, and you should try to come up with algorithms that are fast, though for this project that is not the primary goal. What is fast? Well, suppose you have a 3GHz processor. Really fast is a frame interval: 1/30 of a second. I don't expect you to get close to this, though there are research groups that are trying. Moderately fast is 1 second (not counting file-read time). If you manage this and get it to work well, I will be really impressed. Ten seconds is pretty good, and one minute is OK for this demonstration, though too slow for a practical system. Ten minutes is a real dog, and one hour - well, don't hog the machines for that long.

You should work as individuals on this assignment. Your writeup should include a detailed description of your algorithm and a discussion of the problems you encountered and how you resolved them. You should include tables of performance on the test images (success or failure for each image) and indicate which, if any, of the images were used for training. If you use the faces from the class for training, you will need an independent test set. Images for training and testing can be obtained from current and previous years' classes in ~nelson/pics/faces_449/*. If you want more test data, there is a set of small portraits in ~nelson/pics/people/md_faces. These might be outside the scale range your system works for, and would need to be magnified by a factor of 2 to be usable. Report how much time your algorithm uses and where that time is spent. If there is a time vs. accuracy tradeoff in your system, describe it as well.

The ipp image libraries have C routines for reading and writing image files into a core format where you can get hold of the pixels, for displaying them, and for miscellaneous low-level image-processing operations (such as scaling and smoothing). You can experiment with some of these operations using the program image_calc in /u/nelson/bin/(Solaris|PCLinux), though this is somewhat of a work in progress and is not yet hooked up to all the available operations. More information is on the resource pages and, after that, in the Readme and include files. You are free to use any of this (except the object recognition system, which is not really appropriate anyway).
