The purpose of the visual measures is to transform the intensity images into a more compact and descriptive space, while still capturing the pose of the object. Unlike in a traditional calibrated setting, there is no need to model exactly how the visual measure captures the object pose. This freedom admits, in addition to the feature location measures traditionally used, less traditional, non-geometric measures such as the filter-based perception vector described in Section 4.1.2, or the very specialized, hard-to-model measures we describe in [Kutulakos and Jägersand, 1995], used to bring about rotational viewpoint changes for object shape exploration in an active vision system. We impose the following requirements on the visual measure:
We use real-time visual feature trackers of three different kinds. Deformable contours, ``snakes'' [Terzopoulos and Szeliski, 1992, Curwen and Blake, 1992], are used to track surface discontinuities. A locally developed template-matching tracker tracks multiple local features arising from surface markings or corners. For reliability in repeated experiments, or to deal with smooth, featureless surfaces such as the light bulb in Fig. 7.7, we use special-purpose trackers, which track attached surface markings or small lights. To improve tracking, viewing geometry models are widely used. For instance, the Oxford snake package uses an affine model to constrain the motion of the spline control points to rigid 3D deformations, and a strain energy model for non-rigid image plane deformations. The convolution trackers use point velocity for prediction. The initialization of the snake and template-based trackers is done manually. In the tele-assisted ``learning'' method, the operator ``points out'' what to do in the image, and the tracking initialization routine then finds the highest-contrast tracking goals in the vicinity of these points.
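As a rough illustration of the template-matching step (the function name, window size, and search radius below are ours for illustration, not those of our tracker), a minimal sum-of-squared-differences search over a small window could be written in Python/NumPy as:
\begin{verbatim}
import numpy as np

def ssd_track(frame, template, center, search_radius=8):
    """Locate `template` in `frame` near `center` by minimizing the
    sum of squared differences over a small search window.
    Window size and search radius are illustrative only."""
    th, tw = template.shape
    cy, cx = center
    best_err, best_pos = np.inf, center
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y0, x0 = cy + dy - th // 2, cx + dx - tw // 2
            if y0 < 0 or x0 < 0:
                continue                      # window outside the image
            patch = frame[y0:y0 + th, x0:x0 + tw]
            if patch.shape != template.shape:
                continue                      # window outside the image
            err = np.sum((patch.astype(float) - template.astype(float)) ** 2)
            if err < best_err:
                best_err, best_pos = err, (cy + dy, cx + dx)
    return best_pos
\end{verbatim}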
The image Jacobian provides a new model for tracking.
As noted in Section 4, the subspace of possible solutions $\dot{y}$ to the equation $\dot{y} = J \dot{x}$ is in $\mathbb{R}^n$ rather than $\mathbb{R}^m$ ($n \ll m$). In our active framework the robot agent also knows along which direction $\dot{x}$ the system changes. This leaves only a one-dimensional search space along $J \dot{x}$ in feature space.
Note however that we cannot simply constrain
the tracker output to this space. That would take away the innovation
term in our model updating, and the system would no longer adapt
its model to a changing environment. Instead we use the prediction $\hat{y} = y + J\dot{x}$ to detect outliers (e.g. stemming from occluded features or the tracker tracking the wrong thing) and to constrain the tracking search window to a small ``cylinder'' around $J\dot{x}$. In future developments we intend to use the predictor in a more general Kalman filter.
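As a rough sketch of how the prediction can gate the tracker output (assuming stacked feature vectors; the threshold and names below are illustrative, not those of our implementation), the ``cylinder'' test could look like:
\begin{verbatim}
import numpy as np

def cylinder_gate(y_prev, y_meas, J, dx, radius=3.0):
    """Test whether a tracker measurement lies inside a small 'cylinder'
    around the one-dimensional search direction J @ dx in feature space.
    `radius` is an illustrative threshold, not the system's value."""
    d = J @ dx                          # predicted direction of feature motion
    d = d / (np.linalg.norm(d) + 1e-12)
    r = y_meas - y_prev                 # observed change in the feature vector
    along = float(np.dot(r, d))         # component along the predicted direction
    perp = r - along * d                # perpendicular (innovation) component
    inlier = np.linalg.norm(perp) <= radius
    return inlier, along
\end{verbatim}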
The idea in the eigen-image subspace method is to project the raw image intensity values onto a basis of $m$ eigen-images. Representations based on this idea have been used in addressing recognition problems (``what'') [Nayar et al., 1995, Turk and Pentland, 1991] and location determination problems (``where'') [Rao and Ballard, 1995].
There are several ways to choose the eigen-images. In our case we will be looking at the same agent, in different poses, and all the images we want to represent are fairly similar. In this case it is advantageous to use a basis specifically designed for the agent. In summary (see also [Nayar et al., 1995, Turk and Pentland, 1991]) this can be done by acquiring a (large) number $p$ of size-$N$ images $I_1, \ldots, I_p$ of the agent in different poses, each image treated as a length-$N$ column vector of pixel intensities. Let the mean image be $\bar{I} = \frac{1}{p} \sum_{i=1}^{p} I_i$, and for each image $I_i$ in the data set form the difference image $D_i = I_i - \bar{I}$. Form a measurement matrix $A = [D_1, \ldots, D_p]$, and calculate the covariance matrix $C = A A^T$. The principal components of this data are the eigenvectors of the matrix $C$. The eigenvectors form an orthogonal basis for the original image set, accounting for the variation in the data in decreasing order, according to the corresponding eigenvalues. A dimensionality reduction is achieved by using, instead of all $N$ eigenvectors, only a subspace of say the first $m$ eigenvectors. For practical reasons usually $p \ll N$, and the covariance matrix $C$ will be rank deficient. We can then save computational effort by instead computing the $p \times p$ matrix $L = A^T A$ and using the $p$ eigenvectors $v_i$ of $L$ to form the first $m$ eigenvectors $u_i$ of $C$ by $u_i = A v_i$, where $i = 1, \ldots, m$.
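A minimal NumPy sketch of this basis construction, using the small $p \times p$ matrix $L$ as above (function and variable names are ours and purely illustrative), could look like:
\begin{verbatim}
import numpy as np

def eigen_image_basis(images, m):
    """Build an m-dimensional eigen-image basis from p training images,
    each flattened to N pixels, using the small p x p matrix L = A^T A
    instead of the N x N covariance C = A A^T."""
    A = np.stack([im.ravel().astype(float) for im in images], axis=1)  # N x p
    mean = A.mean(axis=1)
    A = A - mean[:, None]               # columns are the difference images D_i
    L = A.T @ A                         # p x p, cheap to diagonalize
    w, V = np.linalg.eigh(L)            # eigenvalues in ascending order
    order = np.argsort(w)[::-1][:m]     # keep the m largest
    U = A @ V[:, order]                 # u_i = A v_i, one eigen-image per column
    U /= np.linalg.norm(U, axis=0)      # normalize the eigen-images
    return mean, U
\end{verbatim}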
After a basis has been acquired (which for a particular agent typically only needs to be done once), any new image $I$ can be represented in this basis as an appearance vector
\[ y = [u_1, \ldots, u_m]^T (I - \bar{I}), \]
and a given $y$ can be transformed (with some quality loss) into a corresponding image by the inverse formula
\[ I \approx \bar{I} + \sum_{i=1}^{m} y_i u_i . \]
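Continuing the sketch above, projection onto the basis and the (lossy) inverse can be written as:
\begin{verbatim}
def project(image, mean, U):
    """Appearance vector y: coordinates of (I - mean) in the eigen-image basis."""
    return U.T @ (image.ravel().astype(float) - mean)

def reconstruct(y, mean, U, shape):
    """Approximate inverse: an image recovered (with some quality loss) from y."""
    return (mean + U @ y).reshape(shape)
\end{verbatim}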
Previous image-based view synthesis methods have often been based on dense disparity maps. The objective has been to synthesize different viewpoints on a 1D viewing circle, or possibly a 2D viewing sphere [Werner et al., 1995, Seitz and Dyer, 1995, Laveau and Faugeras, 1994]. An advantage of using disparity is that considerable research effort has been spent on developing good ``stereo'' vision algorithms, in which finding the disparity between the images from the two cameras is usually a crucial step. We base our method on an algorithm by Cox [Cox et al., 1992], which uses a Bayesian maximum likelihood (ML) approach to match image intensities, searching along the epipolar lines of two images.
We cannot directly apply these disparity methods. While in previous view synthesis work the image motion caused by a change in viewpoint is along an epipolar line, we study a multi-DOF articulated agent, where image motion can occur in many directions. We have instead tried an approximate method, based on image rectification [Seitz and Dyer, 1995], which aligns the scan lines in the rectified images with the major direction of motion. Disparity is then measured along these scan lines only, using the 1D disparity algorithm. This approach works relatively well for simulating our PUMA robot, where the rigid links are attached to each other: often the major motion is caused by the two joints in the base, and the remaining joint motions can be seen as small corrections to this large motion.
Given the disparity map $d_k$, represented as a column vector, between the view $I_{k-1}$ at time $k-1$ and the view $I_k$ at time $k$, for the 1D case with which we experiment the appearance vector $y$ is chosen so that $y_k - y_{k-1} = d_k$. This determines the set $\{ y_k \}$ up to a constant vector. The constant vector is set with reference to a special view, which for the on-line case is the current view, and for the off-line case is a small set of reference views.
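A small sketch of how consecutive disparity maps might be accumulated into appearance vectors, with the free constant fixed by a reference view (names illustrative), is:
\begin{verbatim}
def appearance_vectors(disparities, y_ref):
    """Accumulate consecutive disparity maps (column vectors) into appearance
    vectors; the free constant is fixed by the reference view's vector y_ref."""
    ys = [y_ref]
    for d in disparities:        # d relates the view at time k-1 to that at k
        ys.append(ys[-1] + d)    # so that y_k - y_{k-1} = d
    return ys
\end{verbatim}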
The inverse problem of getting the image back from the appearance vector, assuming no occlusion, is solvable given a reference image $I_{\mathrm{ref}}$ (in the on-line simulation case discussed later, this is just the current rectified real image) and a desired change in visual appearance $\Delta y$ using the equation
\[ I_{\mathrm{new}}(i + \Delta y(i,j),\, j) = I_{\mathrm{ref}}(i,j), \]
where the vectors are indexed in the corresponding 2-D image coordinate locations $(i,j)$. The synthesized image $I_{\mathrm{new}}$ is then ``unrectified'' into the original camera frame, and can be displayed.
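As an illustrative sketch of the forward warp implied by the equation above (nearest-pixel rounding and no occlusion handling; not our actual implementation):
\begin{verbatim}
import numpy as np

def synthesize_1d(ref_image, delta_y):
    """Forward-warp a rectified reference image along the scan-line coordinate
    i by the per-pixel appearance change delta_y(i, j), i.e.
    I_new(i + delta_y(i, j), j) = I_ref(i, j)."""
    n_i, n_j = ref_image.shape
    out = np.zeros_like(ref_image)
    for j in range(n_j):
        for i in range(n_i):
            i_new = i + int(round(delta_y[i, j]))
            if 0 <= i_new < n_i:
                out[i_new, j] = ref_image[i, j]
    return out
\end{verbatim}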