Our method has a learning stage and an image sequence synthesis stage.
In the learning stage the problem we solve is: given a (time) sequence of
intensity images and the corresponding sequence of motor states of an
articulated agent, find an approximately invertible mapping between
image space and motor space, so that (for the training set) each image can be
reproduced from its motor state. To make the problem tractable we divide this
mapping into two parts: a mapping between images and appearance vectors
$\mathbf{y}$, and a visual-motor function $f$ mapping motor states
$\mathbf{x}$ to appearance vectors. We will show how to estimate
(learn) the function $f$ from watching a moving articulated agent.
For images of size $N \times M$, viewed as points in $\mathbb{R}^{NM}$, and an agent
with $n$ degrees of freedom ($\mathbf{x} \in \mathbb{R}^n$), we choose an appearance vector
space $\mathbb{R}^m$ with $m \ll NM$, so that a significant dimensionality reduction
is achieved. For example, in the two-joint robot arm simulation in section
3.1, m = 24 and n = 2.
Figure 1 outlines our method. During training,
sequences of intensity images are transformed into sequences of
appearance vectors by either a disparity or a subspace method
described in the next section. The appearance vectors and
the corresponding motor vectors are used to estimate the visual-motor
model $f$, either on-line while the training images are acquired (section
2.2) or off-line (section 2.3.2) from a batch set of images.
Figure 1: Overview of appearance based view synthesis.
In this section we briefly review the principles of the two visual front ends we have experimented with. We require visual representations to be: (1) approximately invertible, so that reasonable-quality images can be reconstructed; (2) well behaved (smooth) with respect to motor space, so that the visual-motor model can be estimated. We also prefer representations that are not restricted to a particular class of agents and do not require a priori models.
The idea in the subspace eigen image method is to project the raw
intensity values onto a basis of m eigen images. Representations
based on this idea have been used for the recognition (``what'') problem [1, 3]
and the location determination (``where'') problem [2].
There are several ways to choose the
eigen images. In our case we will be looking at the same agent,
in different poses, and all the images we want to represent are
fairly similar. In this case it is advantageous to use a basis
specifically designed for the agent. In summary (see also
[1, 3])
this can be done by acquiring a (large) number $p$ of size $N \times M$ images
$x_1, \ldots, x_p$ of the agent in different poses. Let the mean image be
$\bar{x} = \frac{1}{p}\sum_{i=1}^{p} x_i$, and for each image $x_i$
in the data set form the difference image $\tilde{x}_i = x_i - \bar{x}$.
Form a measurement matrix $A = [\tilde{x}_1, \ldots, \tilde{x}_p]$,
and calculate the covariance matrix $C = AA^T$. The principal
components of this data are the eigenvectors of the matrix $C$.
The eigenvectors form an orthogonal basis for the original image
set, accounting for the variation in the data in decreasing order,
according to the corresponding eigenvalues. A dimensionality
reduction is achieved by using, instead of all $NM$ eigenvectors,
only the subspace spanned by, say, the first $m$ eigenvectors.
For practical reasons usually $p \ll NM$, and the covariance
matrix $C$ will be rank deficient. We can then save computational effort
by instead computing the much smaller matrix $L = A^T A$ and using the $p$ eigenvectors
$v_i$ of $L$ to form the $m$ first eigenvectors $u_i$ of $C$ by $u_i = A v_i$,
where $i = 1, \ldots, m$.
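For concreteness, a minimal sketch of this basis computation with the $L = A^T A$ shortcut is given below (NumPy; the training images are assumed to be already flattened into the columns of an array, and the function and variable names are ours, not the paper's):

```python
import numpy as np

def eigen_image_basis(images, m):
    """Compute the first m eigen images from p training images.

    images: array of shape (NM, p), one flattened image per column.
    Returns (mean_image, U) where U has shape (NM, m).
    """
    NM, p = images.shape
    x_bar = images.mean(axis=1, keepdims=True)   # mean image
    A = images - x_bar                           # difference images, NM x p
    # Since p << NM, work with the small p x p matrix L = A^T A
    # instead of the NM x NM covariance matrix C = A A^T.
    L = A.T @ A
    eigvals, V = np.linalg.eigh(L)               # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:m]        # keep the m largest
    U = A @ V[:, order]                          # eigenvectors of C: u_i = A v_i
    U /= np.linalg.norm(U, axis=0)               # normalize to unit length
    return x_bar, U
```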
After a basis has been acquired (which for a particular agent
typically only needs to be done once), any new image $x$ can be
represented in this basis as an appearance vector
$\mathbf{y} = U^T (x - \bar{x})$, where $U = [u_1, \ldots, u_m]$,
and a given $\mathbf{y}$ can be transformed
(with some quality loss)
into a corresponding image by the inverse formula
$\hat{x} = U \mathbf{y} + \bar{x}$.
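In the same hypothetical notation, the projection onto the basis and its approximate inverse might look like:

```python
def to_appearance(image, x_bar, U):
    """Project a flattened image onto the eigen basis: y = U^T (x - x_bar)."""
    return U.T @ (image.reshape(-1, 1) - x_bar)

def from_appearance(y, x_bar, U, shape):
    """Approximate inverse: reconstruct an image from its appearance vector."""
    return (U @ y + x_bar).reshape(shape)
```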
Previous image based view synthesis methods have often been based on dense disparity maps. The objective has been to synthesize different viewpoints on a 1D viewing circle, or possibly a 2D viewing sphere [5, 6, 7]. An advantage of using disparity is that considerable research effort has been spent on developing good ``stereo'' vision algorithms, in which finding the disparity between the images in the two cameras is usually a crucial step. We base our method on an algorithm by Cox et al [4], which uses a Bayesian ML approach to do an image intensity based matching of image features searching along the epipolar lines in two images.
We cannot directly apply these disparity based methods. While in previous view synthesis work the image motion caused by a change in viewpoint is along an epipolar line, we study a multi-DOF articulated agent, where image motion can occur in many directions. We have instead tried an approximate method, based on image rectification[6], aligning the scan lines in the rectified images with the major direction of motion. Disparity is then measured only along these scan lines, using the 1D disparity algorithm. This approach works relatively well for simulating our PUMA robot, where the rigid links are attached to each other: often the major motion is caused by the two joints in the base, and the other joint motions can be seen as small corrections to this large motion.
Given the disparity map $\mathbf{d}_{k-1,k}$, represented as a column vector,
between the view $I_{k-1}$ at time $k-1$ and the view $I_k$ at time $k$,
for the 1D case we experiment with the appearance vector $\mathbf{y}_k$ is
chosen so that $\mathbf{d}_{k-1,k} = \mathbf{y}_k - \mathbf{y}_{k-1}$.
This determines the set of $\mathbf{y}_k$ up to a constant vector.
The constant vector is set with reference to a special view, which for the on-line case
is the current view, and for the off-line case is a small set
of reference views.
The inverse problem of getting the image back from the appearance
vector, assuming no occlusion, is solvable given a reference
image $I_r$ (in the on-line simulation case discussed later,
this is just the current rectified real image)
and a desired change in visual appearance $\Delta\mathbf{y}$ as

$\hat{I}(i,\, j + \Delta\mathbf{y}(i,j)) = I_r(i,j),$

where the vectors are indexed in the corresponding 2-D image
coordinate locations $(i,j)$. The synthesized image $\hat{I}$
is then ``unrectified'' into the original camera frame, and
can be displayed.
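As a concrete illustration of this warping step, the sketch below forward-warps a rectified reference image along its scan lines by a per-pixel shift. The function name, the rounding to integer shifts, and the handling of holes are our assumptions; the paper does not specify these details:

```python
import numpy as np

def warp_along_scanlines(ref_image, disparity):
    """Forward-warp a rectified reference image along its scan lines.

    ref_image: (H, W) rectified intensity image I_r.
    disparity: (H, W) desired horizontal shift for every pixel (i, j).
    Returns the synthesized rectified image (holes are left at 0).
    """
    H, W = ref_image.shape
    out = np.zeros_like(ref_image)
    for i in range(H):
        for j in range(W):
            j_new = j + int(round(disparity[i, j]))
            if 0 <= j_new < W:
                out[i, j_new] = ref_image[i, j]
    return out
```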
The visual appearance vector and the agent's actions are related by an
initially unknown visual-motor model $f$, satisfying $\mathbf{y} = f(\mathbf{x})$.
At any time $k$ we want to estimate a first order model
$\Delta\mathbf{y} \approx J(\mathbf{x}_k)\,\Delta\mathbf{x}$,
valid around the current system configuration $\mathbf{x}_k$, and
described by the ``image''[12] or visual-motor Jacobian $J$ defined as

$J(\mathbf{x}) = \frac{\partial f}{\partial \mathbf{x}} = \left[ \frac{\partial y_i}{\partial x_j} \right]$    (4)
The image Jacobian not only relates visual changes to motor changes, as has been
previously exploited in visual feedback control[10], but also highly
constrains the possible visual changes to the set of possible
solutions $\{\Delta\mathbf{y} = J\Delta\mathbf{x} : \Delta\mathbf{x} \in \mathbb{R}^n\}$. Thus the Jacobian
$J$ is also a visual model, parameterized in exactly the same degrees
of freedom our system can change in. (Typically $n \ll m$, so
the solution set is only a small subspace of $\mathbb{R}^m$.)
In observing the moving agent we obtain the change in
visual appearance $\Delta\mathbf{y}_k$ corresponding to a
particular controller command $\Delta\mathbf{x}_k$. This is essentially
a secant approximation of the derivative of $f$ along the direction
$\Delta\mathbf{x}_k$. We want to update the Jacobian in such a way as
to satisfy our most recent observation (secant condition):

$\hat{J}_k \Delta\mathbf{x}_k = \Delta\mathbf{y}_k$    (5)
The above condition is underdetermined, and a family of updating formulas, called the Broyden hierarchy, is defined as follows:

$\hat{J}_k = \hat{J}_{k-1} + \sum_i C_i$    (6)

where the $C_i$ are different rank 1 matrices, so the rank of the correction term
is equal to the number of non-zero $C_i$.
We choose an unsymmetric correction term, giving the rank 1 updating formula in the Broyden hierarchy

$\hat{J}_k = \hat{J}_{k-1} + \frac{(\Delta\mathbf{y}_k - \hat{J}_{k-1}\Delta\mathbf{x}_k)\,\Delta\mathbf{x}_k^T}{\Delta\mathbf{x}_k^T \Delta\mathbf{x}_k}$    (7)
For a set of orthogonal movements about a point $\mathbf{x}_0$, (7) is identical to
a finite difference approximation of the Jacobian, $J_{ij} \approx \Delta y_i / \Delta x_j$,
in a coordinate frame rotated so that the coordinate axes align with the movement directions.
Note however that our estimation accepts movements along arbitrary directions,
and thus is applicable also when the agent movement cannot
be specifically and precisely tailored for the training, such as when learning
the model of a biological (human) agent, or when the real agent can only
be observed, not controlled.
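The update itself is only a few lines; a sketch in the same hypothetical NumPy notation:

```python
import numpy as np

def broyden_update(J, dx, dy):
    """Rank-1 (Broyden) update of the visual-motor Jacobian estimate.

    Enforces the secant condition J_new @ dx = dy for the most recently
    observed motor change dx (length n) and appearance change dy (length m).
    """
    dx = np.asarray(dx, dtype=float).ravel()
    dy = np.asarray(dy, dtype=float).ravel()
    residual = dy - J @ dx                      # part of dy the old model missed
    return J + np.outer(residual, dx) / np.dot(dx, dx)
```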
Over the course of time our estimation method will generate a
piecewise linear estimate of the visual-motor model. This is illustrated in
fig. 2. The size of the mesh element on which a particular
Jacobian is used is determined using a trust region method [18].
Let $\delta$ be the trajectory length (measured in motor space)
between Jacobian updates, and define a model agreement measure comparing
the predicted appearance change $\hat{J}\Delta\mathbf{x}$ with the observed change
$\Delta\mathbf{y}$. The trust region method adjusts $\delta$
according to this agreement ($\delta_{\min}$ and $\delta_{\max}$ are predefined bounds):
$\delta$ is increased when the linear model predicts the observed visual change well,
and decreased when it does not.
Figure 2: On-line piecewise linear function approximation.
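To make the adjustment rule concrete, here is a minimal sketch of one possible trust-region step-length update, using the relative prediction error as the agreement measure; the thresholds, scale factors, and names are illustrative assumptions, not values from the paper:

```python
import numpy as np

def adjust_step(delta, J, dx, dy, d_min=0.01, d_max=1.0,
                good=0.1, bad=0.5, grow=2.0, shrink=0.5):
    """Grow or shrink the trajectory length delta between Jacobian updates.

    Agreement is measured as the relative error between the predicted
    appearance change J @ dx and the observed change dy.
    """
    error = np.linalg.norm(dy - J @ dx) / max(np.linalg.norm(dy), 1e-12)
    if error < good:           # model predicts well: enlarge the trust region
        delta = min(grow * delta, d_max)
    elif error > bad:          # model predicts poorly: shrink it
        delta = max(shrink * delta, d_min)
    return delta
```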
We describe view synthesis in the off-line and on-line cases. The off-line
case is: given a motor command sequence and a
previously learned visual-motor model $f$ of an agent,
generate a movie of the agent performing the task.
The on-line case is to generate arbitrary
simulated views, representing (reasonably small) deviations from
the current state of the real physical agent, while that same
agent is executing some task and while
learning and refining the model used to generate the synthesized views.
In the on-line case we are currently observing, and possibly
controlling, the physical agent we want to be able to simulate.
Assume observation of the visual-motor system has been going on for at
least $n$ linearly independent moves $\Delta\mathbf{x}_1, \ldots, \Delta\mathbf{x}_n$, where
$n$ is the number of controlled DOFs of the agent. Then, through
observation of the process by the method in section 2.2, we have
an estimate $\hat{J}_k$ of the current visual-motor Jacobian.
Consider one step in an on-line algorithm.
At time $k$ we have the current image $I_k$, appearance vector $\mathbf{y}_k$,
estimated visual-motor Jacobian $\hat{J}_k$, and current agent state $\mathbf{x}_k$
in motor space. Around this state the visual appearance
change caused by a hypothetical movement $\Delta\mathbf{x}^*$ can be
predicted by:

$\Delta\mathbf{y}^* = \hat{J}_k \Delta\mathbf{x}^*$    (9)
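Putting the pieces together, one on-line prediction step might be sketched as follows, reusing the hypothetical `from_appearance` helper from the subspace sketch above; all names are ours:

```python
def predict_view(y_k, J_k, dx_star, x_bar, U, shape):
    """Predict the image after a hypothetical movement dx_star.

    Uses the linear visual-motor model around the current state and the
    approximate inverse of the subspace visual front end.
    """
    dy_star = J_k @ dx_star              # predicted appearance change (eq. 9)
    y_star = y_k + dy_star               # predicted appearance vector
    return from_appearance(y_star.reshape(-1, 1), x_bar, U, shape)
```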
This system description is easily cast as a telemanipulation application, where the tele-operator controls the agent, but where, for instance, long delays or limited bandwidth between the tele-operation site and the agent prevent immediate and/or full frame rate visual feedback to the operator. Instead we use the view synthesis method to generate the immediate visual feedback, and use the real visual feedback to calibrate the model used for the view synthesis. Note that for simplicity the synthesis and estimation processes are described as synchronized here, but they can be asynchronous.
After some delay $d$, and possibly at a lower rate than full frame
rate, the real image arrives. From it the real measured feature vector
is extracted and used to update the Jacobian estimate by the method in section 2.2.
The on-line method thus estimates, and uses, successive piecewise linear models
of the visual-motor function, each estimated model valid around a
particular physical motor state $\mathbf{x}_k$.
How long a delay $d$ we can tolerate depends on the validity range for
our linear model represented in (8), which in turn depends
on the visual-motor model of our system and on
the visual measures we choose.
In the off-line case we have a data set of images and a set of
the corresponding agent control vectors. If the data are sampled
relatively densely, ordering the set so that consecutive motor states
are close together allows us to use the on-line method to estimate
the visual-motor model $f$. However, we assume more processing time
is available in the off-line case, and one of several well known
function approximation techniques can be used. We show experiments
with (cubic) spline interpolation and weighted inverse distance
interpolation. The latter is interesting in that by manipulating
the weight vector we can vary the ``smoothness'' of the
estimate and, as shown in section 3.2, trade positioning
accuracy for image sharpness.
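As an illustration of the second technique, a minimal inverse-distance-weighted estimate of the appearance vector at a query motor state is sketched below; the power-law weighting and the parameter names are common choices of ours, not the paper's exact weight vector:

```python
import numpy as np

def idw_appearance(x_query, X_train, Y_train, power=2.0, eps=1e-9):
    """Inverse distance weighted interpolation of appearance vectors.

    X_train: (p, n) motor states, Y_train: (p, m) appearance vectors.
    A larger `power` gives a less smooth, more local estimate.
    """
    d = np.linalg.norm(X_train - x_query, axis=1)
    if np.any(d < eps):                  # query coincides with a training sample
        return Y_train[np.argmin(d)]
    w = 1.0 / d ** power
    return (w[:, None] * Y_train).sum(axis=0) / w.sum()
```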
The batch part of the off-line view synthesis is as follows:
during use, desired action sequences are supplied,
for instance by a tele-operator interacting with the ``virtual''
agent. They are made into images in a straightforward way: each desired
motor state is mapped through the estimated visual-motor model to an
appearance vector, which is then transformed into an image by the
(approximate) inverse of the visual front end.
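Schematically, the batch synthesis loop could look like the following sketch, where `estimate_f` stands for whichever off-line function approximation was fitted and `from_appearance` is the hypothetical inverse of the subspace front end from the earlier sketch:

```python
def synthesize_movie(motor_commands, estimate_f, x_bar, U, shape):
    """Turn a desired motor command sequence into a sequence of images."""
    frames = []
    for x_star in motor_commands:
        y_star = estimate_f(x_star)      # motor state -> appearance vector
        frames.append(from_appearance(y_star.reshape(-1, 1), x_bar, U, shape))
    return frames
```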
Note that the task in the training data set
and the motor space description of the desired simulated actions
need not be related in any way, other than that the desired simulated
actions should be drawn from (a subset of) the workspace explored in the
training. That is, the robot could have been doing a different task
in the training stage than the one being simulated, and the training task
does not need to contain the exact poses of the simulation stage, only
reasonably close ones.