
4.1 Visual Measures

The purpose of the visual measures is to transform the intensity images into a more compact and descriptive space, while still capturing the pose of the object. Unlike in a traditional calibrated setting, there is no need to model exactly how the visual measure captures the object pose. This freedom admits, in addition to the traditionally used feature-location measures, less traditional, non-geometric measures such as the filter-based perception vector described in Section 4.1.2, or the very specialized, hard-to-model measures we describe in [Kutulakos and Jägersand, 1995], used to bring about rotational viewpoint changes for object shape exploration in an active vision system. We impose the following requirements on the visual measure:

  1. The visual-motor transfer function needs to be smooth; that is, the set of visual measures in the perception vector $\mathbf{y}$ should be chosen so that $f$ in $\mathbf{y} = f(\mathbf{x})$ is smooth.
  2. The perception vector needs to capture the pose of the object(s) we wish to manipulate. This means that the visual-motor Jacobian J should be non-singular, and its condition number $\kappa(J)$ should be small (see the sketch after this list). Note that manipulation in a near-singular configuration is typically much harder with a real physical manipulator than in a numerical simulation of the same system, because of the relatively inaccurate response of the real manipulator.
  3. Preferably, the perception vector should be a compact description of the relevant visual information. We consider a reasonable number of feature values to be on the order of 10 to 100 for 6-DOF manipulation.
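
As an illustration of requirement 2, the following minimal sketch (Python with NumPy; the threshold and names are ours, not from the original system) checks whether an estimated visual-motor Jacobian is well conditioned before it is used for control:

    import numpy as np

    def jacobian_ok(J, max_condition=1e3):
        """Accept a visual-motor Jacobian J (m visual measures x n motor
        DOF) only if it is far from singular: full column rank and a
        small condition number kappa(J) = sigma_max / sigma_min."""
        s = np.linalg.svd(J, compute_uv=False)  # singular values, descending
        if s[-1] == 0.0:                        # rank deficient: pose not captured
            return False
        return s[0] / s[-1] < max_condition

    # Example: a (hypothetical) 10-measure, 6-DOF Jacobian estimate
    J = np.random.randn(10, 6)
    print(jacobian_ok(J))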

4.1.1 Feature Based Measures

We use real-time visual feature trackers of three different kinds. Deformable contours (``snakes'') [Terzopoulos and Szeliski, 1992, Curwen and Blake, 1992] are used to track surface discontinuities. A locally developed template-matching tracker tracks multiple local features from surface markings or corners. For reliability in repeated experiments, or to deal with smooth featureless surfaces such as the light bulb in Fig. 7.7, we use special-purpose trackers, which track attached surface markings or small lights. To improve tracking, viewing geometry models are widely used. For instance, the Oxford snake package uses an affine model to constrain the motion of the spline control points to rigid 3D deformations, and a strain-energy model for non-rigid image-plane deformations. The convolution trackers use point velocity for prediction. The initialization of the snake and template based trackers is done manually. In the tele-assisted ``learning'' method, the operator ``points out'' what to do in the image, and the tracking initialization routine finds the highest-contrast tracking goals in the vicinity of these points.
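
The template-matching step can be sketched as follows; this is a hedged illustration of the general SSD-over-a-search-window idea with velocity prediction, not the locally developed tracker itself, and all names are ours:

    import numpy as np

    def track(image, template, pred, radius=8):
        """Return the (row, col) minimizing the sum of squared
        differences (SSD) to `template` within `radius` pixels of the
        predicted position `pred`."""
        th, tw = template.shape
        r0, c0 = pred
        best, best_pos = np.inf, pred
        for r in range(max(0, r0 - radius), r0 + radius + 1):
            for c in range(max(0, c0 - radius), c0 + radius + 1):
                patch = image[r:r + th, c:c + tw]
                if patch.shape != template.shape:
                    continue            # search window fell off the image
                ssd = np.sum((patch.astype(float) - template) ** 2)
                if ssd < best:
                    best, best_pos = ssd, (r, c)
        return best_pos

A point-velocity predictor would supply pred = pos[k-1] + (pos[k-1] - pos[k-2]).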

The image Jacobian provides a new model for tracking. As noted in Section 4, the subspace of possible solutions $\Delta\mathbf{y}$ to the equation $\Delta\mathbf{y} = J \Delta\mathbf{x}$ is $n$-dimensional rather than $m$-dimensional ($n < m$). In our active framework the robot agent also knows along which direction $\Delta\mathbf{x}$ the system changes. This leaves only a one-dimensional search space along $J \Delta\mathbf{x}$ in feature space. Note, however, that we cannot simply constrain the tracker output to this space: that would remove the innovation term in our model updating, and the system would no longer adapt its model to a changing environment. Instead we use the prediction $J \Delta\mathbf{x}$ to detect outliers (e.g. stemming from occluded features, or from the tracker tracking the wrong thing) and to constrain the tracking search window to a small ``cylinder'' around $J \Delta\mathbf{x}$. In future developments we intend to use the predictor in a more general Kalman filter.
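
A minimal sketch of this prediction and gating follows, simplified to a spherical gate around the point prediction rather than the full cylinder around the direction $J \Delta\mathbf{x}$; the radius and names are illustrative, and the features are assumed to be stacked 2-D image points:

    import numpy as np

    def gate_features(y_prev, y_meas, J, dx, radius=3.0):
        """Predict the feature change from the known motor command dx
        as J dx, and flag as outliers the measurements far from the
        prediction (e.g. occluded features, or a tracker locked onto
        the wrong target). Features stacked as (u1, v1, u2, v2, ...)."""
        y_pred = y_prev + J @ dx
        resid = (y_meas - y_pred).reshape(-1, 2)   # per-feature residual
        inlier = np.linalg.norm(resid, axis=1) < radius
        return y_pred, inlier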

4.1.2 Eigen-Image Subspace Based Filters

  The idea in the eigen-image subspace method is to project the raw image intensity values onto a basis of m eigen-images. Representations based on this idea have been used for recognition problems (``what'') [Nayar et al., 1995, Turk and Pentland, 1991] and location determination problems (``where'') [Rao and Ballard, 1995]. There are several ways to choose the eigen-images. In our case we will be looking at the same agent in different poses, and all the images we want to represent are fairly similar, so it is advantageous to use a basis specifically designed for the agent. In summary (see also [Nayar et al., 1995, Turk and Pentland, 1991]), this can be done by acquiring a (large) number p of size $h \times w$ images $\mathbf{a}_1, \ldots, \mathbf{a}_p$ of the agent in different poses. Let $\bar{\mathbf{a}} = \frac{1}{p} \sum_{i=1}^{p} \mathbf{a}_i$ be the mean image, and for each image in the data set form the difference image $\mathbf{d}_i = \mathbf{a}_i - \bar{\mathbf{a}}$. Form a measurement matrix $A = [\mathbf{d}_1, \ldots, \mathbf{d}_p]$, and calculate the covariance matrix $C = A A^T$. The principal components of these data are the eigenvectors of the matrix C. The eigenvectors form an orthogonal basis for the original image set, accounting for the variation in the data in decreasing order according to the corresponding eigenvalues. A dimensionality reduction is achieved by using, instead of all $hw$ eigenvectors, only a subspace of, say, the first $m \ll hw$ eigenvectors. For practical reasons usually $p \ll hw$, and the covariance matrix C will be rank deficient. We can then save computational effort by instead computing the much smaller matrix $L = A^T A$ and using the p eigenvectors $\mathbf{v}_k$ of L to form the m first eigenvectors $\mathbf{u}_k$ of C by $\mathbf{u}_k = A \mathbf{v}_k$, where $k = 1, \ldots, m$.
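
The construction above translates directly into code. The following sketch (NumPy; array shapes and names are our own) builds the eigen-image basis using the cheap p x p matrix L rather than the full covariance C:

    import numpy as np

    def eigen_basis(images, m):
        """images: list of p arrays of shape (h, w); returns the mean
        image (flattened) and the first m eigen-images, the columns of U."""
        A = np.stack([im.ravel() for im in images], axis=1)  # (h*w, p)
        mean = A.mean(axis=1, keepdims=True)                 # mean image
        A = A - mean                                         # difference images d_i
        L = A.T @ A                                          # p x p, not (h*w) x (h*w)
        evals, V = np.linalg.eigh(L)                         # ascending eigenvalues
        order = np.argsort(evals)[::-1][:m]                  # keep the m largest
        U = A @ V[:, order]                                  # u_k = A v_k
        U /= np.linalg.norm(U, axis=0)                       # normalize eigen-images
        return mean.ravel(), U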

After a basis has been acquired (which for a particular agent typically needs to be done only once), any new image $\mathbf{a}$ can be represented in this basis as an appearance vector

(11)   $\mathbf{y} = U^T (\mathbf{a} - \bar{\mathbf{a}})$, where $U = [\mathbf{u}_1, \ldots, \mathbf{u}_m]$,

and a given $\mathbf{y}$ can be transformed (with some quality loss) into a corresponding image by the inverse formula

(12)   $\hat{\mathbf{a}} = U \mathbf{y} + \bar{\mathbf{a}}$.
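
Under the assumptions of the basis sketch above, Equations (11) and (12) become one line each:

    import numpy as np

    def to_appearance(image, mean, U):
        # Eq. (11): project the (flattened) image onto the eigen-images
        return U.T @ (image.ravel() - mean)

    def from_appearance(y, mean, U, shape):
        # Eq. (12): lossy reconstruction from the appearance vector
        return (U @ y + mean).reshape(shape)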

4.1.3 Disparity Based Measures

Previous image-based view synthesis methods have often been based on dense disparity maps, with the objective of synthesizing different viewpoints on a 1D viewing circle, or possibly a 2D viewing sphere [Werner et al., 1995, Seitz and Dyer, 1995, Laveau and Faugeras, 1994]. An advantage of using disparity is that considerable research effort has been spent on developing good ``stereo'' vision algorithms, in which finding the disparity between the images from the two cameras is usually a crucial step. We base our method on an algorithm by Cox [Cox et al., 1992], which uses a Bayesian maximum likelihood (ML) approach to match image intensities, searching along the epipolar lines in two images.

We cannot directly apply these disparity methods: while in previous view synthesis work the image motion caused by a change in viewpoint is along an epipolar line, we study a multi-DOF articulated agent, where image motion can occur in many directions. We have instead tried an approximate method, based on image rectification [Seitz and Dyer, 1995], aligning the scan lines in the rectified images with the major direction of motion. Disparity is then measured along this line only, with the 1D disparity algorithm. This approach works relatively well for simulating our PUMA robot, where the rigid links are attached to each other; often the major motion is caused by the two joints in the base, and the other joint motions can be seen as small corrections to this large motion.

Given the disparity map $\mathbf{d}$, represented as a column vector, between the view $I_{k-1}$ at time k-1 and the view $I_k$ at time k, for the 1D case with which we experiment the appearance vector $\mathbf{y}$ is chosen so that $\Delta\mathbf{y} = \mathbf{y}_k - \mathbf{y}_{k-1} = \mathbf{d}$. This determines the set of appearance vectors $\{\mathbf{y}_k\}$ up to a constant vector. The constant vector is set with reference to a special view, which for the on-line case is the current view, and for the off-line case is a small set of reference views. The inverse problem of getting the image back from the appearance vector, assuming no occlusion, is solvable given a reference image $I_k$ (in the on-line simulation case discussed later, this is just the current rectified real image) and a desired change in visual appearance $\Delta\mathbf{y}$, using the equation

(13)   $\hat{I}(i,\, j + \Delta\mathbf{y}_{(i,j)}) = I_k(i, j)$,

where the vectors are indexed at the corresponding 2-D image coordinate locations (i,j). The synthesized image $\hat{I}$ is then ``unrectified'' back into the original camera frame, and can be displayed.
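
A hedged sketch of this synthesis step, assuming the rectified images are 2-D arrays and the appearance change is a per-pixel shift along the scan lines (occlusion handling and hole filling are omitted):

    import numpy as np

    def synthesize(I_k, dy):
        """I_k: rectified reference image (h, w); dy: desired appearance
        change as a disparity map (h, w). Forward-warps each scan line
        so that I_hat[i, j + dy[i, j]] = I_k[i, j], as in Eq. (13)."""
        h, w = I_k.shape
        I_hat = np.zeros_like(I_k)
        for i in range(h):
            j = np.arange(w)
            j_new = np.clip(j + np.round(dy[i]).astype(int), 0, w - 1)
            I_hat[i, j_new] = I_k[i, j]   # duplicate targets: last write wins
        return I_hat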


