Taking Pictures


Eventually, you are going to want to take your own pictures and get them into the computer. There are a couple of sneaky ways to do this if you only need one or two. The department once had a scanner, hooked up to an old Mac in the software lab (727); as of 01/2006 it appears to be gone, and I'm not sure what happened to it. Scanners can output a variety of formats. If you want to use my image processing libraries, and some other programs around the department, you need UNcompressed tiff (not compressed, even though compression is often recommended), since some programs can't handle the compressed format. The campus multimedia center also has a scanner, or you may have your own; you can scan a picture, store it as a file on portable media, bring it back, and read it into our system, assuming you can find a machine with a reader that can handle whatever you stored.

Don't scan currency or official documents, and be respectful of copyrighted material. If you scan in a picture from some publication and use it for class experiments, no one is likely to find out and make a fuss, but if you put it on your web page, there could be trouble, especially if it happens to be a Disney image. (Disney has agents whose job it is to find unauthorized uses of their characters and images, and bring the malefactors to justice.)

There is a digital camera associated with the vision lab, which appears and disappears. Chris Brown is the most frequent steward. If plugged into a USB port it will show up as a file system with stored images; this has even been known to work on Linux boxes. The image format is set with camera controls prior to taking the pictures. Marty has another, and can sometimes be persuaded to take a picture or two and put them on the system. You may also have your own digital camera. Getting the pictures into a usable format for machine processing is your problem. The images tend to be significantly larger than video resolution (480x640 or 512x512), so some algorithms may be slow. They also tend to be compressed. If you are not careful, you can end up with highly compressed or even 8-bit dithered palette images. The latter will produce all sorts of nasty artifacts if you attempt to turn it into either gray scale or 3-band color, and will probably need a lot of cleanup filtering to make it usable in a computer vision application (if that is possible at all). Best advice: read the camera manual.

We have a couple of USB video cameras in the lab. If you plug them into the appropriate port and run the program coriander (not installed here, but present in some student directories, e.g., ~sanders/Linux/bin, and available on the web), you may, with luck, be able to generate some pictures. My experience is that the interface is a bit buggy, and the quality is not what one might hope. (Especially surprising is the fact that the supposedly higher quality uncompressed modes produce worse images than some compressed ones.)

For general picture taking, and for applications where image acquisition needs to be accomplished within a program, you currently have to rely on some video-camera/digitizer combination. Doing this for the first time, especially in our lab, can be a little tricky, as there are a number of variables involved. What follows is some general information, and some specific hints for using some of the setups.


General Information

To take a picture, you need a camera, a digitizer, a computer, and some software. You also need at least one cable, often more, and sometimes connector conversion plugs. The camera has to be compatible with and plugged into the digitizer, the digitizer has to be compatible with and installed in the computer, and the computer must have software installed for dealing with the particular digitizer. Digital cameras often have USB or FireWire connections, and can be instructed to download images to a computer without the need for a digitizer card, if appropriate software is installed on the machine.

Compatibility between cameras and digitizers is basically a matter of video format: the camera must output a format that the digitizer can use for input. The BT848-based digitizers, which are in our computational sensors, will take composite, S-video (and probably monochrome), but not RGB. The old KTV digitizers (now only in the nearly obsolete multiprocessors aorta and brain) will work with RGBS and monochrome video, but not composite or S-video. It is possible to muck with an S-video signal to obtain a usable monochrome signal, but you can't get the color out.

Compatibility between digitizers and computers is a matter of bus architecture. Most commercially available digitizers are designed for PCI bus architectures, which means they will work on PCs and (theoretically) on the new Suns, which use the PCI bus. The main limiting factor for Linux and Solaris is the lack of drivers, which has generally made it impossible for us to use most of the higher-end devices (e.g. the Matrox Meteor series) until they were no longer high-end.

The Osprey BT848-based digitizers in the computational sensors are PCI devices, and they currently work on our Linux-based PCs, but not on the Suns, due to the lack of Solaris drivers. Since we have hardly any Suns left (as of 2003), this is not much of a practical issue. Our old KTV digitizers are SBus devices, which means they work on old Suns. The now defunct MaxVideo boards needed to sit on a VME bus, which the two-generation-old Sun machines used. To use them on the one-generation-old Suns, we needed an SBus-to-VME converter (which often had problems).

Software is specific to machine architecture, operating system, and device. We can get PCI bus digitizers cheap (the Osprey BT848 cards are about $130), and in theory they will run in current generation (2003) Suns, but there are generally no drivers available except for PC architectures. Since the internal protocol of these digitizers is often not public, or at least not easy to get, writing our own drivers is tricky. Since we have recently moved most of our research computation to Linux-based PC architectures, this is less of a problem than in the past, though similar issues hold for Linux, even running on x86 (PC) architectures. For example, as of 01/2006 the digitizers do not run under the current Fedora system due to changes in the Video4Linux (V4L) standard; hence the computational sensors are still running under the old RedHat OS. The software interacts with the digitizer to obtain the digital values of the pixels, and then writes a representation of the image to disk in some file format (e.g. tiff, pgm, gif, etc.). In general the user does not see either the image representation employed by the digitizer, or the internal representation that the software uses to produce the output file.
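To make that last step concrete, here is a minimal sketch (in C, not taken from any of our libraries) of writing an 8-bit grayscale pixel buffer to disk as a binary PGM file. How the buffer gets filled from the digitizer is the device-specific part, and is exactly what the drivers and acquisition programs described below take care of.

    /* Minimal sketch: write an 8-bit grayscale pixel buffer as a binary
     * PGM (P5) file.  Filling the buffer from the digitizer is the
     * device-specific part handled by the programs described below. */
    #include <stdio.h>

    int write_pgm(const char *filename, const unsigned char *pixels,
                  int nrows, int ncols)
    {
        FILE *fp = fopen(filename, "wb");
        if (fp == NULL)
            return -1;
        fprintf(fp, "P5\n%d %d\n255\n", ncols, nrows);  /* width, height, maxval */
        fwrite(pixels, 1, (size_t)nrows * ncols, fp);   /* raw pixel rows */
        fclose(fp);
        return 0;
    }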


Taking pictures using the BT848-based digitizers

We have several PCI-bus digitizers based on the Brooktree 848 (BT848) digitizer chip installed on Linux x86 boxes in the lab. Most of these are currently on the "computational sensors" - platforms with shark, shellfish, and whale names in the vision lab and the software lab.

As of 01/2005 there are about 16 computational sensors in the department, give or take a few. Most are in the vision lab, 4 are in the software lab (2 currently usable), and 2 are in RN's office. These are Dell boxes with twin Xeon processors and about 1 GB of memory. There are 3 generations, with different speed processors. The fastest are the "whale" machines at 3.0 GHz; the shellfish are 2.0 GHz, and the sharks are 1.5 GHz. Lobster, mussel, clam, and oyster are headless and tied to various cameras in the ceiling. They are accessed remotely, or via a console switch located near the processor rack. Shrimp is set up with a monitor as a workstation, as are 3 of the older shark machines. These can be attached to cameras as needed. All the machines have approximately 300 GB of internal RAID hard disk capacity, accessible at /localdisk. You should use this if you record video sequences, as it has faster access than the network storage, and you won't screw up the file systems by choking the partitions with huge amounts of video imagery (this is surprisingly easy to do).

The digitizer cards are in the external PCI slots and have a row of 2 or 3 (yellow-core) RCA (plug-in) connectors for composite input, and 1 (7-pin female) plug for the S-video input. (Why the S-video plug has 7 pins when the cable has only 4, I don't know.) Unlike the KTV cards, the BT848 cards have no video output, so you can't display what they are seeing on an external monitor (though you can still split the input signal). Also, no more than one process can use the card at once, so you can't run bt_showlive and then take a picture with bt_get_image without terminating the showlive program. Fortunately you don't need to, since bt_showlive itself takes a picture when the right-hand mouse button is hit.

There is a set of programs (in ~nelson/bin/PCLinux) that are useful for viewing and acquiring data from these digitizers; the most useful ones are described below.

The digitizers can also be accessed under program control using various routines. Take a look at the source code for bt_showlive to see how (it's in ~nelson/programs/src/digitizers/bt848/bin).
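Purely as an illustration of what such frame-grabbing code looks like (this is NOT the in-house bt848 library), here is a rough sketch of grabbing a single frame through the old Video4Linux (V4L1) read() interface that the bttv driver provides under the RedHat systems the sensors run. The device name and grayscale palette are assumptions; for the real interface, read the bt_showlive source.

    /* Illustrative sketch only, NOT the in-house bt848 library: grab one
     * frame via the old Video4Linux (V4L1) read() interface provided by
     * the bttv driver.  /dev/video0 and the grayscale palette are
     * assumptions; error checking is abbreviated. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/videodev.h>   /* V4L1 header on pre-V4L2 systems */

    int main(void)
    {
        struct video_capability cap;
        struct video_picture pict;
        struct video_window win;
        unsigned char *frame;
        int fd;

        fd = open("/dev/video0", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        if (ioctl(fd, VIDIOCGCAP, &cap) < 0) { perror("VIDIOCGCAP"); return 1; }

        /* Ask the driver for 8-bit grayscale pixels. */
        ioctl(fd, VIDIOCGPICT, &pict);
        pict.palette = VIDEO_PALETTE_GREY;
        pict.depth = 8;
        ioctl(fd, VIDIOCSPICT, &pict);

        /* Set the capture window to the largest frame the card reports. */
        ioctl(fd, VIDIOCGWIN, &win);
        win.width = cap.maxwidth;
        win.height = cap.maxheight;
        ioctl(fd, VIDIOCSWIN, &win);

        /* One read() returns one frame in the requested palette. */
        frame = malloc((size_t)win.width * win.height);
        read(fd, frame, (size_t)win.width * win.height);

        /* ... write the buffer out, e.g. with a PGM writer as above ... */
        free(frame);
        close(fd);
        return 0;
    }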

bt_showlive

This program puts a live view of what is going into the bt digitizer on your screen, and allows you to acquire an image of the current display by hitting the right-hand mouse button. It also allows you to acquire a video sequence (represented as a numbered series of individual image files in a directory). The program xshow_video (in ~nelson/bin/PCLinux) can be used to view videos taken by this method. It defaults to a half-size monochrome image running at a maximum of 15 frames per second, but various options permit color images, different sizes, and different frame rates. Use -h to see available options, of which there are many; an example invocation follows the option list below. If you set your DISPLAY environment variable correctly, you can run this program remotely and use it to spy on whatever the camera happens to be looking at at the moment (if it happens to be on).
  usage: bt_showlive [options] 
  Options
    -c0, -board0 = (camera) digitizer board 0 (default)
    -c1, -board1 = (camera) digitizer board 1 
    -svideo; -channel2 = S Video input (default)
    -comp0;  -channel0 = composite input 0
    -comp1;  -channel1 = composite input 1
    -comp2;  -channel3 = composite input 2
    -color = show color image; slow
    -monoR = take mono image from Red channel 
    -monoG = take mono image from Green channel (default) 
    -monoB = take mono image taken from Blue channel 
    -fps  = frames per second (max) (default 15)
    -spf  = seconds per frame (min) 
    -nr  = number of rows (default 480)
    -nc  = number of columns (default 640)
    -sr  = start row (default 0)
    -sc  = start column (default 0)
        Above dimensions are prior to subsampling operation
        which defaults to a factor of two (half-size)
    -full = full-size image (no subsampling)
    -half = half-size image (default)
    -fourth = fourth-size image
    -ifb  = image file base name for any grabbed images
         (default SHOWLIVE_TEMP)
    -ift  = image format tag (default .tiff)
    -record  = save video frames to directory
    -vfb  = base name for video frames (default VFRAME)
    -vlimit  = set video file storage limit to specified
        number of Mbytes (default 1000)
    -h = print this message
  
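For example, to view composite input 0 on digitizer board 1 in color at quarter size, with any grabbed frames written as pgm files with the base name mypic, something like the following should work (check -h if the argument conventions have changed):

     bt_showlive -board1 -comp0 -color -fourth -ifb mypic -ift .pgm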

If you are running locally, there is a faster, direct-mapped, live-video program for these cards. Unfortunately I can't remember where it is or what it is called at the moment. You are welcome to try to dig it up if you are so inclined.

bt_get_image

bt_get_image acquires a monochrome or color image from the BT848 digitizer and outputs it in one of several formats (currently it handles iff, tiff, pgm, and cvl). Its functionality has been mostly subsumed by bt_showlive, so the program is primarily of use if you want to acquire images without running video on your screen. The file format is determined by looking at the suffix of the output file, and defaults to iff if no suffix is recognized.
    bt_get_image [options] outfile
    Options:
      -board0 = digitizer board 0 (default)
      -board1 = digitizer board 1
      -svideo; -channel2 = S-video input (default)
      -comp0;  -channel0 = composite input 0
      -comp1;  -channel1 = composite input 1
      -comp2;  -channel3 = composite input 2
      -color = take color image
      -monoR = take mono image from Red channel
      -monoG = take mono image from Green channel (default)
      -monoB = take mono image taken from Blue channel
  
Note that the default action is currently a monochrome image off the green channel. Note also that pgm and cvl are not defined for color images; in that case the program will default to tiff output format. To get a tiff monochrome image from the green band of digitizer 0 using the S-video input, you would type
     bt_get_image testpic.tiff
  
To get a tiff color image from composite channel 0 of digitizer 1, you would type
    bt_get_image -board1 -comp0 -color testpic.tiff
  

Taking pictures using the KTV digitizers

The KTV digitizers are RGBS (red, green, blue, synch) SBus devices. They are installed in a Sun computer chassis, and two sets of four wires hang out the back: RGBS input, and RGBS output. In the grad lab we have four of these left: 2 in brain, and 2 in aorta. The inputs and outputs are wired to the central patch panel, and must be accessed from there, since the machines are in another room. The ports have labels like "brain ktv0".

To acquire an image with these digitizers, you need either a camera that produces RGBS, or a monochrome camera. For a color camera, you connect the RGBS outputs of the camera to the corresponding inputs of the digitizer. For a monochrome camera, you connect the camera output to the green input channel. The digitizer outputs can be connected to an RGBS monitor if you want to see what the digitizer is doing, or use it to put overlays on the video signal. If you just want to see what the camera sees, it is usually easier to connect the camera to a monitor as well as the digitizer, either by splitting the RGBS signals, or by using the composite output, which RGBS cameras often have in addition to the RGBS outputs.

The programs for acquiring images with the KTV digitizers are in /u/nelson/bin/Solaris on the grad network. You can also access these digitizers under program control, and there is a set of C library routines for controlling them. Looking at the source for ktv_get_image will give you some idea of how they are used, and there is a (light purple) manual in the lab describing them more fully.

Useful programs are:

ktv_get_image

ktv_get_image acquires an 8 bit monochrome image from the green channel of the digitizer, and outputs it in one of several formats (currently it handles iff, tiff, pgm, and cvl). The file format is determined by looking at the suffix of the output file, and defaults to iff if no suffix is recognized. Usage is
    ktv_get_image [-s or -g] [-c0 or -c1] 
    Options:
        -s   synch on synch 
        -g   synch on green (default)
        -c0  Use camera 0 (default)
        -c1  Use camera 1 
  
If an RGBS color signal instead of a mono signal is being used, you need to use the -s option to avoid a scrambled image. ktv_getiff is essentially the same program, except that it can only output iff images. So to get a tiff-format image from digitizer 0 with a monochrome input on the green channel, you would type
     ktv_get_image testpic.tiff
  
To get an iff image from digitizer one with RGBS input, you would type
    ktv_getiff -s -c1 testpic.iff
  

ktv_getras

ktv_getras acquires a 3-band color image from the camera and outputs a Sun raster (.ras) file. xv can be used to convert this to more general formats such as gif or tiff. This program is useful for getting pictures to put on the net. Usage is:
      ktv_getras 
      Options:
           -m    Use monochrome mode  (instead of default color)
           -s    Use SYNC line for SYNC  (default)
           -g    Use green line for SYNC
           -n    Do not clear or digitize
           -c0   Use camera 0 (default)
           -c1   Use camera 1 
  

ktv_get_color_image

ktv_get_color_image gets a 3 band, 8 bit per band color image, and stores it as a color tiff file (.tiff) or a color iff file (.iffcol). The ipp package can input .tiff or .iffcol images as color_image_type, which lets you write C/C++ programs to manipulate them. Usage is:
    ktv_get_color_image [-s or -g] [-c0 or -c1] 
    Options:
        -s   synch on synch (default)
        -g   synch on green 
        -c0  Use camera 0 (default)
        -c1  Use camera 1 
  
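By analogy with ktv_get_image, the output file is presumably given as the last argument, with its suffix selecting the format. So to grab a color tiff from an RGBS camera on digitizer 1, something like the following should work:

    ktv_get_color_image -s -c1 testpic.tiff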

ktv_showlive

This program puts a live view of what is going into the ktv digitizer on your screen. It is like bt_showlive, but without all the added bells and whistles. It defaults to a half-size monochrome image. There are lots of options, including ones to get a color image and change the size and frame rate, but the default values are probably all you will generally need, except for -s and -g to specify synchronization, and maybe the camera specifiers. The program does not run as efficiently as would be possible with a direct mapping from the digitizer buffer to an X visual, and can be a dog if it has to convert, say, a color image to an 8-bit dithered display, but on the other hand, it is general enough to work with almost any display setup you happen to have. As with bt_showlive, if you set your DISPLAY environment variable correctly, you can run this program remotely.
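For example, to watch an RGBS camera on digitizer 1 with the defaults otherwise (assuming the camera specifiers are -c0 and -c1, as in the other ktv programs), something like the following should do:

    ktv_showlive -s -c1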

Taking pictures using the SGI digitizer

As of 2003 we don't have any SGI machines in the vision lab, but this section is retained for historical interest, as a starting point should we ever acquire new SGI machines, and because it might be useful for using the SGI machines in the VR lab.

Silicon Graphics makes digitizers that interface with their proprietary graphics bus. The SGI machine in the vision lab (carbon) has one of these digitizers installed. It is a digital video device with two input and two output channels. The unit is capable of digitizing both channels simultaneously, provided they are synchronized. The output channels let you see what the digitizer is doing, and can be displayed on an NTSC monitor. (Actually they can display quite a bit more when used in conjunction with the SGI graphics capabilities, but that is another story.)

The SGI digitizer is accessible from a small box attached to the (purple) processing unit by a short cable. None of our cameras generate digital video, hence input to (and output from) the SGI digitizer must be run through an analog-to-digital conversion system. This is a box that sits under the video monitors beside the SGI console, and converts between NTSC (composite) video and the SGI digital standard. This box is wired to the central patch panel, and you should access it from there. The ports are labeled "carbon".

Theoretically, you should be able to feed a composite signal into the patch panel input and have everything work. Unfortunately, the conversion unit has poor analog lock-up circuitry, and will not satisfactorily convert the signals coming out of most of our cameras. To solve this problem, the signal has to be routed through yet another intermediate stage, a device known as a time-base corrector. This essentially takes a noisy signal, digitizes it internally, and produces a cleaned-up version. It is useful for digitizing from a videotape output as well (which none of our digitizers will lock to). This unit is also accessible via the patch panel, with inputs and outputs labeled "TBC".

To take a picture using the SGI digitizer, you need to find a camera providing composite (or monochrome) output and plug its output into the time-base corrector input. The corrector panel switch must be set to "normal", not "bypass". Plug the TBC output into one of the "carbon" input channels on the patch panel. This connects you to the SGI via the analog-to-digital video converter. The program used to acquire and store an image is called "capture", and is part of the SGI system software. Type "capture" on the SGI console, and point and click in the GUI as directed. (Basically you have to get an icon of a camera in the mode position, and click on the "record" and then the "stop" buttons.) The output is in SGI rgb format, which can be converted to other formats using the program xv. (Yes, it is a real mess, and a royal pain.)

Actually, capture is a generic, interactive program that can acquire movies and sounds as well as images. There is a man page you can read if you are interested in the advanced capabilities. You must be logged onto carbon's console to use capture, as the graphical user interface does not export - at least not to the Suns.

