
2 Theory

 

Our method has a learning stage and an image sequence synthesis stage. In the learning stage the problem we solve is: given a (time) sequence of intensity images $I_k$ and the corresponding sequence of motor states $\mathbf{x}_k$ of an articulated agent, find an approximate invertible mapping $F$ between motor space and image space, so that (for the training set) $I_k \approx F(\mathbf{x}_k)$. To make the problem tractable we divide $F$ into two parts, $F = g \circ f$, where the intermediate result $\mathbf{y} = f(\mathbf{x})$ lies in a space of appearance vectors. We will show how to estimate (learn) the function $f$ from watching a moving articulated agent. For images of size $N \times N$ (so $I \in \mathbb{R}^{N^2}$) and an agent with $n$ degrees of freedom ($\mathbf{x} \in \mathbb{R}^n$), we choose the appearance vector space $\mathbf{y} \in \mathbb{R}^m$ so that a significant dimensionality reduction $n \le m \ll N^2$ is achieved. For example, in the two joint robot arm simulation in section 3.1, m = 24 and n = 2.

Figure 1 outlines our method. During training, sequences of intensity images are transformed into sequences of appearance vectors by either a disparity or a subspace method described in the next section. The appearance vectors and the corresponding motor vectors are used to estimate the visual-motor model $f$, either on-line while the training images are acquired (section 2.2) or off-line (section 2.3.2) from a batch set of images.

Figure 1: Overview of appearance based view synthesis.

2.1 Appearance based representations

In this section we briefly review the principles of the two visual front ends we have experimented with. We require visual representations to be: (1) approximately invertible, so that reasonable quality images can be reconstructed; (2) well behaved (smooth) with respect to motor space, so that the visual-motor model can be estimated. We also prefer representations that are not restricted to a particular class of agents and do not require a priori models.

2.1.1 Subspace eigen-image methods

The idea in the subspace eigen-image method is to project the raw intensity values onto a basis of m eigen-images. Representations based on this idea have been used for the recognition problem (``what'') [1, 3] and for the location determination problem (``where'') [2]. There are several ways to choose the eigen-images. In our case we will be looking at the same agent in different poses, and all the images we want to represent are fairly similar, so it is advantageous to use a basis specifically designed for the agent. In summary (see also [1, 3]) this can be done by acquiring a (large) number p of $N \times N$ images $I_1, \ldots, I_p$ of the agent in different poses. Let the mean image be $\bar{I} = \frac{1}{p}\sum_{i=1}^{p} I_i$, and for each image in the data set form the difference image $\Phi_i = I_i - \bar{I}$. Form a measurement matrix $A = [\Phi_1, \ldots, \Phi_p]$, with each difference image scanned into a column vector, and calculate the covariance matrix $C = A A^T$. The principal components of this data are the eigenvectors of the matrix C. The eigenvectors form an orthogonal basis for the original image set, accounting for the variation in the data in decreasing order according to the corresponding eigenvalues. A dimensionality reduction is achieved by using, instead of all $N^2$ eigenvectors, only the subspace spanned by, say, the first $m$ eigenvectors. For practical reasons usually $p \ll N^2$, and the covariance matrix C will be rank deficient. We can then save computational effort by instead computing the $p \times p$ matrix $L = A^T A$ and using the p eigenvectors $\mathbf{v}_i$ of L to form the first m eigen-images $\mathbf{e}_i$ by $\mathbf{e}_i = A \mathbf{v}_i$, $i = 1, \ldots, m$.
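As an illustration (not the authors' code), the basis construction above can be condensed into a few lines of numpy, using the small $p \times p$ matrix $L = A^T A$ instead of the full covariance; the array shapes, function name and normalization step are our own assumptions.

import numpy as np

def eigen_image_basis(images, m):
    """Compute the mean image and the first m eigen-images from p training images.

    images : array of shape (p, N, N), views of the agent in different poses
    m      : number of eigen-images to keep (m << N*N)
    """
    p = images.shape[0]
    X = images.reshape(p, -1).astype(float)   # each row is one image
    mean = X.mean(axis=0)                     # mean image
    A = (X - mean).T                          # difference images as columns, (N*N, p)
    L = A.T @ A                               # small p x p matrix instead of C = A A^T
    evals, V = np.linalg.eigh(L)              # eigen-decomposition of L
    order = np.argsort(evals)[::-1][:m]       # indices of the m largest eigenvalues
    E = A @ V[:, order]                       # lift to eigen-images of C
    E /= np.linalg.norm(E, axis=0)            # normalize each eigen-image
    return mean.reshape(images.shape[1:]), E  # E has shape (N*N, m)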

After a basis has been acquired (which for a particular agent typically only needs to be done once), any new image $I$ can be represented in this basis as an appearance vector

(1)   $y_i = \mathbf{e}_i^T (I - \bar{I}), \qquad i = 1, \ldots, m$

and a given $\mathbf{y}$ can be transformed (with some quality loss) into a corresponding image by the inverse formula

(2)   $\hat{I} = \bar{I} + \sum_{i=1}^{m} y_i \mathbf{e}_i$
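Continuing the sketch above, projection (1) and reconstruction (2) are then single matrix products (again a sketch with our own naming, reusing mean and E from eigen_image_basis).

def to_appearance(image, mean, E):
    """Eq. (1): project a new image onto the eigen-image basis."""
    return E.T @ (image.astype(float).ravel() - mean.ravel())

def from_appearance(y, mean, E):
    """Eq. (2): reconstruct an approximate image from an appearance vector."""
    return (mean.ravel() + E @ y).reshape(mean.shape)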

2.1.2 Disparity based methods

Previous image based view synthesis methods have often been based on dense disparity maps. The objective has been to synthesize different viewpoints on a 1D viewing circle, or possibly a 2D viewing sphere [5, 6, 7]. An advantage of using disparity is that considerable research effort has been spent on developing good ``stereo'' vision algorithms, in which finding the disparity between the images in the two cameras is usually a crucial step. We base our method on an algorithm by Cox et al. [4], which uses a Bayesian maximum-likelihood approach to perform intensity based matching, searching along the epipolar lines of the two images.

We cannot directly apply these disparity based methods. While in previous view synthesis work the image motion caused by a change in viewpoint is along an epipolar line, we study a multi DOF articulated agent, where image motion can occur in many directions. We have instead tried an approximate method, based on image rectification [6], aligning the scan lines in the rectified images with the major direction of motion. Disparity is then measured only along these scan lines with the 1D disparity algorithm. This approach works relatively well for simulating our PUMA robot: the rigid links are attached to each other, the major image motion is often caused by the two joints in the base, and the remaining joint motions can be seen as small corrections to this dominant motion.

Given the disparity map $\mathbf{d}_k$, represented as a column vector, between the view $I_{k-1}$ at time k-1 and the view $I_k$ at time k, for the 1D case we experiment with, the appearance vector $\mathbf{y}_k$ is chosen so that $\mathbf{y}_k - \mathbf{y}_{k-1} = \mathbf{d}_k$. This determines the set of appearance vectors up to a constant vector. The constant vector is set with reference to a special view, which for the on-line case is the current view, and for the off-line case is a small set of reference views. The inverse problem of getting the image back from the appearance vector, assuming no occlusion, is solvable given a reference image $I_k$ (in the on-line simulation case discussed later, this is just the current rectified real image) and a desired change in visual appearance $\Delta\mathbf{y} = \mathbf{y} - \mathbf{y}_k$ as

(3)   $\hat{I}(i,\; j + \Delta y_{(i,j)}) = I_k(i, j)$

where the vectors are indexed at the corresponding 2-D image coordinate locations (i,j). The synthesized image $\hat{I}$ is then ``unrectified'' back into the original camera frame, and can be displayed.
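For concreteness, a minimal sketch of the per-scan-line warp we read into (3): pixels of the rectified reference image are shifted along their scan line by the desired appearance change. Occlusion handling and sub-pixel interpolation are omitted, and the function name is ours.

import numpy as np

def warp_scanlines(I_ref, dy):
    """Forward-warp a rectified image I_k along its scan lines by dy (eq. 3, sketch).

    I_ref : (H, W) rectified reference image
    dy    : (H, W) desired change in visual appearance (1-D disparity) per pixel
    """
    H, W = I_ref.shape
    out = np.zeros_like(I_ref)
    cols = np.arange(W)
    for i in range(H):
        j_new = np.clip(np.round(cols + dy[i]).astype(int), 0, W - 1)
        out[i, j_new] = I_ref[i]              # move each pixel along its scan line
    return out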

2.2 On-line visual-motor model estimation

The visual appearance vector and the agent's actions are related by an initially unknown visual-motor model f, satisfying $\mathbf{y} = f(\mathbf{x})$. At any time k we want to estimate a first order model $\hat{f}_k$, valid around the current system configuration $\mathbf{x}_k$, and described by the ``image'' [12] or visual-motor Jacobian defined as

(4)   $J(\mathbf{x}) = \frac{\partial f}{\partial \mathbf{x}} = \left[ \frac{\partial y_i}{\partial x_j} \right], \qquad i = 1, \ldots, m, \; j = 1, \ldots, n$

The image Jacobian not only relates visual changes to motor changes, as has previously been exploited in visual feedback control [10], but also highly constrains the possible visual changes to the set of possible solutions $\{\Delta\mathbf{y} = J \Delta\mathbf{x} \mid \Delta\mathbf{x} \in \mathbb{R}^n\}$. Thus the Jacobian J is also a visual model, parameterized in exactly the same degrees of freedom in which our system can change. (Typically $n \ll m$, so the solution set is only a small subspace of $\mathbb{R}^m$.)

In observing the moving agent we obtain the change in visual appearance $\Delta\mathbf{y}$ corresponding to a particular controller command $\Delta\mathbf{x}$. This is essentially a secant approximation of the derivative of f along the direction $\Delta\mathbf{x}$. We want to update the Jacobian so as to satisfy our most recent observation (the secant condition):

(5)   $\Delta\mathbf{y} = \hat{J}_k \, \Delta\mathbf{x}$

The above condition is underdetermined, and a family of updating formulas, called the Broyden hierarchy, is defined as follows:

(6)   $\hat{J}_k = \hat{J}_{k-1} + \sum_i \lambda_i B_i$

where the $B_i$ are different rank 1 matrices, so that the rank of the correction term equals the number of non-zero $\lambda_i$. We choose an unsymmetric correction term:

(7)   $\hat{J}_k = \hat{J}_{k-1} + \dfrac{(\Delta\mathbf{y} - \hat{J}_{k-1} \Delta\mathbf{x}) \, \Delta\mathbf{x}^T}{\Delta\mathbf{x}^T \Delta\mathbf{x}}$

This is a rank 1 updating formula in the Broyden hierarchy. For a set of orthogonal movements about a point $\mathbf{x}_0$, (7) is identical to a finite difference approximation of the Jacobian, computed column by column in a coordinate frame rotated so that the movement directions coincide with the coordinate axes. Note however that our estimation accepts movements along arbitrary directions $\Delta\mathbf{x}$ and thus is applicable also when the agent movement cannot be specifically and precisely tailored for the training, such as when learning the model of a biological (human) agent, or when the real agent can only be observed, not controlled.
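A minimal numpy sketch of the secant update (7); the guard against negligible motions is our own addition.

import numpy as np

def broyden_update(J, dx, dy, eps=1e-9):
    """Rank-1 Broyden update of the visual-motor Jacobian estimate (eq. 7).

    J  : (m, n) current Jacobian estimate
    dx : (n,)   executed motor change
    dy : (m,)   observed change of the appearance vector
    """
    denom = dx @ dx
    if denom < eps:                           # ignore negligible motions
        return J
    residual = dy - J @ dx                    # violation of the secant condition (5)
    return J + np.outer(residual, dx) / denom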

Over the course of time our estimation method generates a piecewise linear estimate of the visual-motor model. This is illustrated in fig. 2. The size of the mesh element on which a particular Jacobian is used is determined using a trust region method [18]. Let $\ell$ be the trajectory length ($\ell = \sum_k \|\Delta\mathbf{x}_k\|$) between Jacobian updates, and define a model agreement measure $\rho$ comparing the observed visual change $\Delta\mathbf{y}$ with the change $\hat{J} \Delta\mathbf{x}$ predicted by the current linear model. The trust region method adjusts $\ell$ according to ($\ell_{\min}$ and $\ell_{\max}$ are predefined bounds):

(8)   $\ell \leftarrow \begin{cases} \min(2\ell, \, \ell_{\max}) & \text{if the agreement } \rho \text{ is good} \\ \ell & \text{if } \rho \text{ is acceptable} \\ \max(\ell/2, \, \ell_{\min}) & \text{if } \rho \text{ is poor} \end{cases}$
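The adjustment might be coded as follows; this is only an illustration, and the agreement measure (relative prediction error) and the growth/shrink factor 2 are assumptions, not values taken from the paper.

def adjust_trust_region(length, agreement, good=0.1, poor=0.5,
                        l_min=0.01, l_max=1.0):
    """Grow or shrink the trajectory length between Jacobian updates (eq. 8, sketch).

    agreement : e.g. ||dy_observed - J dx|| / ||dy_observed|| (assumed form)
    good/poor : agreement thresholds; l_min, l_max : predefined bounds
    """
    if agreement < good:                      # model predicts well: expand region
        return min(2.0 * length, l_max)
    if agreement > poor:                      # model predicts poorly: shrink region
        return max(0.5 * length, l_min)
    return length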

Figure 2: On-line piecewise linear function approximation.

2.3 View synthesis

We describe view synthesis in the off-line and the on-line case. The off-line case is: given a motor command sequence $(\mathbf{x}^*_1, \ldots, \mathbf{x}^*_q)$ and a previously learned visual-motor model $\hat{f}$ of an agent, generate a movie of the agent performing the task. The on-line case is to generate arbitrary simulated views, representing (reasonably small) deviations $\Delta\mathbf{x}$ from the current state of the real physical agent, while that same agent is executing some task and while the model used to generate the synthesized views is being learned and refined.

2.3.1 On-line view synthesis

In the on-line case we are currently observing, and possibly controlling, the physical agent we want to be able to simulate. Assume observation of the visual-motor system has been going on for at least n linearly independent moves $\Delta\mathbf{x}$, where n is the number of controlled DOF's of the agent. Then, through observation of the process by the method in section 2.2, we have an estimate $\hat{J}$ of the current visual-motor Jacobian. Consider one step in an on-line algorithm. At time k we have the current image $I_k$, appearance vector $\mathbf{y}_k$, estimated visual-motor Jacobian $\hat{J}_k$, and current agent state $\mathbf{x}_k$ in motor space. Around this state the visual appearance change caused by a hypothetical movement $\Delta\mathbf{x}$ can be predicted by:

(9)   $\hat{\mathbf{y}} = \mathbf{y}_k + \hat{J}_k \, \Delta\mathbf{x}$

This description of the system is easily cast as a telemanipulation application, where the tele-operator controls the remote agent, but where, for instance, long delays or limited bandwidth between the tele-operation site and the agent prevent immediate and/or full frame rate visual feedback to the operator. Instead we use the view synthesis method to generate the immediate visual feedback, and use the real visual feedback to calibrate the model used for the view synthesis. Note that for simplicity the synthesis and estimation processes are described as synchronized here (see the sketch below), but they can be asynchronous.

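The original tables listing the synchronized synthesis and estimation steps did not survive conversion; the following sketch restates one iteration as described in the surrounding text, reusing broyden_update from section 2.2. The agent, render, measure and display interfaces are placeholders of our own, not the authors' API.

def online_step(agent, render, measure, display, J, x, y, dx_command, dx_hyp):
    """One synchronized iteration of on-line view synthesis and model estimation (sketch).

    agent   : placeholder interface with .move(dx) and .capture()
    render  : appearance vector -> image, e.g. eq. (2) or (3)
    measure : image -> appearance vector, e.g. eq. (1)
    display : callback showing an image to the operator
    """
    # Synthesis: predict the appearance after the hypothetical move (eq. 9)
    # and show the synthesized view immediately.
    display(render(y + J @ dx_hyp))
    # Estimation: execute the real command; when the (possibly delayed) real
    # image arrives, extract its appearance vector and refine J (eq. 7).
    agent.move(dx_command)
    I_real = agent.capture()
    y_new = measure(I_real)
    J = broyden_update(J, dx_command, y_new - y)
    return J, x + dx_command, y_new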

After some delay d, and possibly at a lower rate than full frame rate, the real image arrives. From it the real measured appearance vector is extracted and used to refine the visual-motor model as in section 2.2. The on-line method thus estimates, and uses, successive piecewise linear models of the visual-motor function, each estimated model valid around a particular physical motor state $\mathbf{x}_k$. How long a delay d we can tolerate depends on the validity range of our linear model, represented by the trust region size $\ell$ in (8), which in turn depends on the visual-motor model of our system and on the visual measures we choose.

2.3.2 Off-line view synthesis

In the off-line case we have a data set of images $I_1, \ldots, I_p$ and a set of the corresponding agent control vectors $\mathbf{x}_1, \ldots, \mathbf{x}_p$. If the data are sampled relatively densely, ordering the set so that consecutive motor states are close allows us to use the on-line method to estimate the visual-motor model f. However, we assume more processing time is available in the off-line case, and one of several well known function approximation techniques can be used. We show experiments with (cubic) spline interpolation and with weighted inverse distance interpolation. The latter is interesting in that by manipulating the weight vector $\mathbf{w}$ we can vary the ``smoothness'' of the estimate and, as shown in section 3.2, trade positioning accuracy for image sharpness.

(10)   $\hat{f}(\mathbf{x}) = \dfrac{\sum_{i=1}^{p} w_i(\mathbf{x}) \, \mathbf{y}_i}{\sum_{i=1}^{p} w_i(\mathbf{x})}$, with weights $w_i(\mathbf{x})$ decreasing with the distance $\|\mathbf{x} - \mathbf{x}_i\|$
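A sketch of the weighted inverse distance interpolation we read into (10); the per-dimension weight vector and the exponent mu controlling smoothness are our assumptions about how the weights are parameterized.

import numpy as np

def idw_model(X_train, Y_train, w=None, mu=2.0, eps=1e-12):
    """Return f_hat: motor state -> interpolated appearance vector (eq. 10, sketch).

    X_train : (p, n) training motor states
    Y_train : (p, m) corresponding appearance vectors
    w       : optional per-dimension weights in the distance metric
    mu      : distance exponent; smaller mu gives a smoother estimate
    """
    w = np.ones(X_train.shape[1]) if w is None else np.asarray(w, float)

    def f_hat(x):
        d = np.sqrt((((X_train - x) ** 2) * w).sum(axis=1))  # weighted distances
        weights = 1.0 / (d ** mu + eps)                       # inverse distance weights
        return (weights[:, None] * Y_train).sum(axis=0) / weights.sum()

    return f_hat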

The batch part of the off-line view synthesis consists of transforming the training images into appearance vectors and estimating the visual-motor model $\hat{f}$ from the pairs $(\mathbf{x}_i, \mathbf{y}_i)$, as sketched below:

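The table describing the batch stage was lost in conversion; the following sketch restates the two steps implied by the text, reusing the helper functions sketched earlier. Function names and defaults are ours.

import numpy as np

def offline_train(images, motor_states, m=24, w=None):
    """Batch stage of off-line view synthesis (sketch).

    images       : (p, N, N) training images
    motor_states : (p, n) corresponding motor control vectors
    Returns (mean, E, f_hat) for the synthesis stage.
    """
    mean, E = eigen_image_basis(images, m)                      # section 2.1.1
    Y = np.array([to_appearance(I, mean, E) for I in images])   # eq. (1)
    f_hat = idw_model(np.asarray(motor_states, float), Y, w=w)  # eq. (10)
    return mean, E, f_hat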

During use, desired action sequences $\mathbf{x}^*_1, \ldots, \mathbf{x}^*_q$ are supplied, for instance by a tele-operator interacting with the ``virtual'' agent. They are made into images in a straightforward way, as sketched below:

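Again only a sketch under the same assumptions: each desired motor state is mapped to an appearance vector by the learned model and rendered with the inverse eigen-image formula (2), reusing from_appearance and the result of offline_train.

def synthesize_movie(x_desired, mean, E, f_hat):
    """Turn a desired motor command sequence into synthesized views (sketch).

    x_desired : (q, n) desired motor states, e.g. supplied by a tele-operator
    """
    frames = []
    for x_star in x_desired:
        y_star = f_hat(x_star)                           # learned visual-motor model
        frames.append(from_appearance(y_star, mean, E))  # eq. (2)
    return frames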

Note that the task in the training data set $\{\mathbf{x}_i\}$ and the motor space description of the desired simulated actions $\{\mathbf{x}^*_k\}$ need not be related in any way, other than that the desired simulated actions should come from (a subset of) the workspace explored in the training. That is, the robot could have been doing a task in the training stage other than the one being simulated, and the training set does not need to contain the exact poses of the simulation stage, only reasonably close ones.

