CSC 249/449 Computer Vision: Camera Characteristics Assignment


Weighting factor 4.

The object of this assignment is to carefully measure and document a number of characteristics for different cameras and lenses. In particular, we are interested in quantifying the noise characteristics and photometric response for several different chip/digitizer combinations, and the off-axis dimming and non-perspective distortion for several different lens/camera combinations.

In order to get the work done, the class will be split into teams, with each team taking responsibility for one problem. Each team should hand in a writeup of its methodology and results. Each team will also give a short (5-10 minute) presentation of its results at the beginning of class on the due date.

Part of the exercise is figuring out what it is reasonable to measure, how to measure it, and how the measurements should be represented. The description of each problem below includes some ideas about possible methods, and other suggestions came up in the class discussion, but none of these should be taken as the only, the best, a complete, or even (necessarily) a workable solution. Ultimately, coming up with something sensible and useful is up to the creativity of your team.

Camera Noise Characteristics

In class, we discussed how it might be useful, or even essential, to have some model of the uncertainty associated with pixel values in a digital image. We discussed how the uncertainty might depend on various factors, including camera, illumination level, chip gain, type of lighting (flickering fluorescents vs. stable incandescents), camera temperature, time elapsed since turn-on, and even image neighborhood.

Your job is to explore the dependence of uncertainty on some of these parameters. At a minimum, you should look at 2 different cameras (more would be even better), and the dependence on illumination and type of lighting. For some of the cameras, you may be able to explore high and low gain regimes by turning on the automatic gain control and varying the overall lighting. You might want to distinguish between intrinsic variation in a single pixel response and local (deterministic) variation in the sensitivity of neighboring pixels. The latter is an effect that could potentially be corrected for (on a chip by chip basis), but more generally will just be lumped into an overall uncertainty.

Some properties you might consider quantifying are the temporal uncertainty of single-pixel response (as a function of illumination level and gain); the local spatial variation in pixel (temporal-mean) responses at a given brightness, and whether this variation is stable and correctable; and the characteristics of camera dark current (cameras produce a signal even in total darkness). And there are others.
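As a concrete starting point, here is a minimal sketch in Python (assuming you can capture a stack of frames of a static, uniformly lit surface into a numpy array) of one way to separate the temporal noise of individual pixels from the fixed-pattern variation in their mean responses:

    # Sketch: separate temporal noise from fixed-pattern (spatial) variation.
    # Assumes `frames` is a (num_frames, height, width) array grabbed from a
    # stationary camera viewing a static, uniformly lit surface.
    import numpy as np

    def noise_summary(frames):
        frames = frames.astype(np.float64)

        # Temporal statistics: mean and standard deviation of each pixel over time.
        temporal_mean = frames.mean(axis=0)   # (height, width)
        temporal_std = frames.std(axis=0)     # per-pixel "intrinsic" noise

        # Fixed-pattern variation: spread of the per-pixel means within small
        # windows, so that off-axis dimming over large neighborhoods does not
        # dominate the estimate.
        h, w = temporal_mean.shape
        win = 16
        local_spatial_std = []
        for r in range(0, h - win + 1, win):
            for c in range(0, w - win + 1, win):
                local_spatial_std.append(temporal_mean[r:r + win, c:c + win].std())

        return {
            "mean gray level": float(temporal_mean.mean()),
            "median temporal std (noise)": float(np.median(temporal_std)),
            "median local spatial std (fixed pattern)": float(np.median(local_spatial_std)),
        }

Repeating such a summary at several illumination levels and gain settings is one way to tabulate how the uncertainty depends on those factors.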

Standard deviation (as a function of various factors) is a frequently used measure of uncertainty, but you might want to check whether the variation actually matches a Gaussian model, or is better described by some other function. Things to watch out for include aliasing artifacts, variation in the sensitivity of individual pixels in a neighborhood, variation over large neighborhoods due to off-axis dimming, digitization artifacts (does the hardware really produce 256 levels?), effects of automatic gain control (you can shut this off in some of the cameras with a switch), effects of dark current, and camera warm-up effects.
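If you want to test the Gaussian assumption directly, one possibility (again only a sketch, assuming a long time series of gray levels recorded from a single pixel) is to fit a normal distribution and apply a standard normality test, while also inspecting the raw histogram for digitization artifacts:

    # Sketch: does a single pixel's temporal variation look Gaussian?
    # Assumes `values` is a 1-D array of that pixel's gray levels over many frames.
    import numpy as np
    from scipy import stats

    def gaussian_check(values):
        values = np.asarray(values, dtype=np.float64)
        mu, sigma = values.mean(), values.std()

        # D'Agostino-Pearson test: a small p-value suggests the distribution
        # is not well described by a Gaussian.
        statistic, p_value = stats.normaltest(values)

        # Integer-bin histogram: a signal that only ever takes a handful of
        # quantized levels will show digitization artifacts here even if the
        # test itself does not reject normality.
        counts, edges = np.histogram(values, bins=np.arange(values.min() - 0.5,
                                                            values.max() + 1.5))
        return mu, sigma, p_value, counts, edges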

Team:

Photometric Response

The numerical value produced by a camera/digitizer combination may not be a linear function of the light intensity falling on a pixel (in fact it usually isn't). Your job is to quantify the photometric function for at least 2 (3 or 4 would be even better) camera/digitizer combinations in the lab or otherwise available (e.g. you might be able to run tests on your digital camera or cell phone). One reasonable way of representing this information might be a graph of digitizer output vs. light intensity level. Identification of an analytic functional form would be an interesting addition.
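For the analytic form, a power-law (gamma-style) curve is one candidate worth trying. The sketch below fits such a curve with scipy; the functional form, and the placeholder numbers in it, are purely illustrative assumptions to be replaced by (and checked against) your own measurements:

    # Sketch: fit output = a * intensity**gamma + b to measured
    # (relative light intensity, digitizer output) pairs. The power-law form
    # is an assumption; inspect the residuals to see whether it really holds.
    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(intensity, a, gamma, b):
        return a * np.power(intensity, gamma) + b

    # Hypothetical placeholder measurements (replace with your own data):
    intensity = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.70, 0.90, 1.00])
    output = np.array([20, 36, 62, 95, 124, 160, 196, 210], dtype=np.float64)

    params, _ = curve_fit(power_law, intensity, output, p0=(200.0, 1.0, 0.0))
    a, gamma, b = params
    print("fit: output ~= %.1f * I^%.2f + %.1f" % (a, gamma, b))
    print("residuals:", output - power_law(intensity, *params))

A plot of output versus intensity with the fitted curve overlaid is the obvious companion figure.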

The challenge is to figure out how to make this measurement with equipment you can easily obtain or make. Some suggestions came up in class, including light meters, commercial reflectance calibration charts, and various methods of directly adjusting the light on a patch in known proportion. There may be other possibilities.

Complicating factors include automatic gain control mechanisms in the camera and digitizer electronics and software, and possible automatic contrast adjustment mechanisms. If either or both of these are operating anywhere in the transduction chain, it will be impossible to determine the photometric function from separate, uniform images of different intensity. A single image with regions of different (known) intensity is probably a better bet, though in that case you have to watch out for potential problems due to off-axis dimming. If you make single-pixel measurements, then uncertainty associated with these can be an issue. Automatic gain control may also change the functional form of the photometric response for different levels of overall illumination. At any rate, it is something to be aware of.
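If you go the single-image route, something along the following lines (a sketch assuming you know the approximate centers of the calibration regions, all kept near the image center) will pull out region means while averaging away single-pixel uncertainty:

    # Sketch: mean gray level of small patches in a single image.
    # Assumes `image` is a 2-D array and `patch_centers` lists (row, col)
    # centers of regions of known relative intensity, placed near the image
    # center so off-axis dimming has little effect. The patch coordinates are
    # assumptions; they depend on how you lay out the chart.
    import numpy as np

    def patch_means(image, patch_centers, half_size=5):
        image = image.astype(np.float64)
        means = []
        for r, c in patch_centers:
            patch = image[r - half_size : r + half_size + 1,
                          c - half_size : c + half_size + 1]
            # Averaging over the patch also suppresses single-pixel noise.
            means.append(patch.mean())
        return np.array(means)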

Team:

Off-axis Dimming

In camera systems imaging an isotropic scene (e.g. uniform white), the light intensity on the image plane varies as a function of the distance from the center of projection due to a number of interacting factors. For the ideal pinhole camera, the intensity falls off as cos^4 of the off-axis angle. For real cameras the falloff is different (usually slower, as lens designers try to produce a uniform image). The task here is to quantify the degree of off-axis dimming for a real pinhole and for several camera/lens combinations, and compare it to the theoretical pinhole cos^4 form. In particular, figure out a way to implement a real pinhole camera over one of our CCD chips. One possibility involves the extension barrels for the miniature RGB cameras on the robot head. It might also be possible to attach a tube to one of the spacer rings for the C-mount cameras. Then look at the longest and shortest focal length fixed lenses you can get for the C-mount and the miniature cameras. If you have extra time, you can look at a zoom lens. Lenses are stored in the gray cabinet in the vision lab. For real lenses, you should check for possible effects of the lens aperture (e.g. wide open versus stopped down).

An obvious methodology is to analyze pictures of a uniformly illuminated white surface. Getting this uniformly illuminated surface is not as easy as you might think. There are a lot of variations due to shadows, differing distances from various light sources and light-reflecting objects, and low-level specularity that are essentially invisible to the eye because of its own adaptive processing, but that can dominate the measurements you take. Moving the camera so that different regions of the surface are imaged by the central neighborhood of the chip is one way of checking (or compensating) for this.
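One way to reduce a flat-field image to something you can plot and compare across lenses is a radial brightness profile. The sketch below assumes a 2-D numpy image and takes the pattern center as a parameter, since (as noted below) it need not coincide with the center of the chip:

    # Sketch: average brightness in thin rings around an assumed pattern center.
    # Assumes `image` is a 2-D array of a uniformly illuminated white surface.
    import numpy as np

    def radial_profile(image, center=None, num_bins=50):
        image = image.astype(np.float64)
        h, w = image.shape
        if center is None:
            center = ((h - 1) / 2.0, (w - 1) / 2.0)
        rows, cols = np.indices(image.shape)
        radius = np.hypot(rows - center[0], cols - center[1])

        edges = np.linspace(0.0, radius.max(), num_bins + 1)
        which = np.digitize(radius.ravel(), edges) - 1
        flat = image.ravel()

        profile = np.full(num_bins, np.nan)
        for i in range(num_bins):
            ring = flat[which == i]
            if ring.size:
                profile[i] = ring.mean()
        return 0.5 * (edges[:-1] + edges[1:]), profile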

Complicating factors include possible interference from automatic gain control systems, especially if the camera is moved around, and problems with pixel-level uncertainty if values are determined from individual pixels. Be careful if the aspect ratio of your pixels is not 1 to 1. You need to determine the field of view, and at least approximate the relationship of angle to pixel index, in order to compare with the cos^4 function (though accurate determination of this is the job of the distortion team). If the lens center (center of projection) is not in the center of the chip, then this will affect your measurements, producing an offset of the pattern. A slight tilt of the lens with respect to the chip can also shift the pattern. Untangling the two is extremely difficult, but you should be aware that it is quite likely that the center of your pattern will not be exactly in the center of the chip.
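For the comparison itself, here is a rough sketch (assuming square pixels after aspect-ratio correction, radii measured from the center of the dimming pattern, and an approximate horizontal field of view) of converting pixel radius to off-axis angle and evaluating the ideal cos^4 falloff:

    # Sketch: ideal cos^4 falloff as a function of radius in pixels.
    # Assumes an approximate horizontal field of view `fov_deg` and square pixels.
    import numpy as np

    def cos4_prediction(radii_px, image_width_px, fov_deg):
        # Effective focal length in pixel units from the field of view:
        # tan(fov / 2) = (width / 2) / f  =>  f = (width / 2) / tan(fov / 2)
        f_px = (image_width_px / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
        angle = np.arctan(radii_px / f_px)   # off-axis angle at each radius
        return np.cos(angle) ** 4            # relative brightness, 1.0 on axis

    # Example comparison (hypothetical numbers): normalize the measured radial
    # profile by its central value and overlay cos4_prediction(radii, 640, 40.0).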

Team:

Non-perspective Distortion

The geometry of images produced by real lenses differs from that predicted by the pinhole model, especially for wide angle (short focal length) lenses. The task here is to quantify this distortion for several lens-camera combinations. At the minimum, you should make measurements for the longest and shortest focal length lenses for the C-mount and miniature cameras. Playing with the zoom lenses could also be fun. Lenses are stored in the gray cabinet of the vision lab.

A challenge is to come up with an intuitive and useful way of representing the distortion. The idea of relating local x and y metrics to the distance from the center of projection was brought up in class. Another idea is simply to functionally relate ideal x and y coordinates to actual x and y coordinates (this function is probably separable). The first representation is related to the partial derivatives of the second. Various graphical representations could also prove useful. Again, they should show the distortion with respect to the perspective model.
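As one illustration of the second representation, a simple radial polynomial relating ideal (pinhole-model) radius to actual radius, together with its derivative (which plays the role of the local metric in the first representation), might look like the following. The polynomial form and any coefficients you plug in are assumptions for illustration, not a claim about any particular lens:

    # Sketch: radial-polynomial representation of non-perspective distortion.
    # Radii are measured from the center of distortion; k1 and k2 would be
    # fit to your grid measurements.
    import numpy as np

    def actual_radius(ideal_radius, k1, k2):
        r = np.asarray(ideal_radius, dtype=np.float64)
        return r * (1.0 + k1 * r**2 + k2 * r**4)

    def local_scale(ideal_radius, k1, k2):
        # Derivative d(actual)/d(ideal): how local distances are stretched or
        # compressed as a function of distance from the center of distortion.
        r = np.asarray(ideal_radius, dtype=np.float64)
        return 1.0 + 3.0 * k1 * r**2 + 5.0 * k2 * r**4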

An obvious methodology is to take pictures of a regular grid or spot pattern, for which you can compute the perspective projection for your camera. Doing this will probably involve some careful measurements near the center of the field of view (where the perspective model tends to be good for any camera).
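Computing the ideal projection for a grid facing the camera is straightforward under the pinhole model. The sketch below assumes the grid plane is perpendicular to the optical axis at a known distance, and that you have rough estimates of the effective focal length in pixel units and of the center of projection on the chip (all of which would come from your near-center measurements):

    # Sketch: ideal (pinhole-model) image positions of a fronto-parallel grid,
    # for comparison with the measured positions. All parameters are estimates
    # you supply; the names here are placeholders.
    import numpy as np

    def ideal_grid_projection(spacing_mm, num_x, num_y, depth_mm, f_px, cx, cy):
        xs = (np.arange(num_x) - (num_x - 1) / 2.0) * spacing_mm
        ys = (np.arange(num_y) - (num_y - 1) / 2.0) * spacing_mm
        gx, gy = np.meshgrid(xs, ys)
        u = cx + f_px * gx / depth_mm   # ideal column coordinate
        v = cy + f_px * gy / depth_mm   # ideal row coordinate
        return u, v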

Complicating factors include accurately measuring the position of grid lines on a limited resolution image. You might want to contrive the situation so this can be done automatically with simple techniques you dream up (since we haven't studied line or dot detection etc. yet). With care, you should be able to get sub-pixel accuracy in your measurements. Make sure you know and properly account for the aspect ratio. Offset of the center of projection from the center of the image can cause complications by shifting the center of distortion. Other issues arise if your grid is not far enough away to be effectively at infinity, and even in that case, certain (cheap) wide-angle lenses may not be able to achieve simultaneous sharp focus over the field of view.
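For automatic localization, thresholding followed by intensity-weighted centroids is about as simple a technique as you can dream up, and it can already reach sub-pixel accuracy. Here is a sketch, assuming bright dots on a dark background in a 2-D gray-level image (invert the image first if your pattern is dark on white):

    # Sketch: locate bright dots to sub-pixel accuracy via thresholding and
    # intensity-weighted centroids. Remember to scale the column coordinate by
    # your measured aspect ratio afterwards.
    import numpy as np
    from scipy import ndimage

    def dot_centroids(image, threshold=None, min_area=4):
        image = image.astype(np.float64)
        if threshold is None:
            threshold = 0.5 * (image.min() + image.max())
        mask = image > threshold

        labels, num = ndimage.label(mask)          # connected components
        centroids = []
        for lab in range(1, num + 1):
            component = (labels == lab)
            if component.sum() < min_area:
                continue                           # reject isolated noise specks
            # Weight each pixel by its brightness above threshold; the centroid
            # of these weights gives a sub-pixel dot position.
            weights = np.where(component, image - threshold, 0.0)
            centroids.append(ndimage.center_of_mass(weights))
        return centroids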

Team:
