3 Experiments

We have carried out an experimental evaluation of the view synthesis method, learning models for the visual appearance changes of a PUMA 760 robot, controlled from either joint or world coordinate space, and of a human arm. First we compare the pros and cons of the two visual front ends we described. Next we discuss the tradeoffs between viewing quality and positioning accuracy made in estimating the visual-motor model. In sect. 3.4 we describe a visual PUMA robot simulator constructed from these components; the simulator can be run without a robot and is available by ftp. Last we evaluate how the method scales when we increase the number of controlled DOFs of the system or demand better image quality from the synthesis.

3.1 Comparison: subspace and disparity based measures

  The two visual measures we have tested for the visual front end are very different. The ``eigen image'' subspace method is just a linear projection from image space, but it gives rise to a much more complex visual-motor function f between appearance space and motor control space. The disparity, or visual motion, measure is more complex, fragile and time consuming to compute, but yields a nearly linear f over most of the workspace of a revolute-link arm. When only very sparse training data is used, the differences between the two visual measures cause the view synthesis to degrade in different ways.
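As an illustration of the subspace front end, the following NumPy sketch (not the implementation used in the experiments; the training data and the choice m=24 are placeholders) shows the linear projection of an image into an m-dimensional appearance space and the corresponding reconstruction; the nonlinearity of the problem is then carried entirely by the visual-motor function f between these coefficients and the motor parameters.

    import numpy as np

    # Placeholder training set: p images flattened to N-pixel vectors.
    images = np.random.rand(110, 64 * 64)

    mean = images.mean(axis=0)
    X = images - mean                      # zero-mean training data

    # Right singular vectors of X are the "eigen images".
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    m = 24
    eigen_images = Vt[:m]                  # (m, N) basis of appearance space

    def project(img):
        """Linear projection from image space to the m-D appearance space."""
        return eigen_images @ (img.ravel() - mean)

    def reconstruct(y):
        """Map appearance-space coefficients back to an approximate image."""
        return mean + eigen_images.T @ y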

We compare how the two methods perform at their limits in figs. 3 and 4. Fig. 3 shows how the synthesis breaks down when new views are synthesized by image extrapolation using the disparity based measure. Between the images there are 6 and 12 degree differences in the commanded joint 1 position (the axis perpendicular to the floor) of the simulated PUMA robot. As can be seen, performance does not degrade gracefully when too large an extrapolation is attempted, and in this case the weakness lies in the disparity estimation (g) rather than in the visual-motor model estimation (f).

Figure 3: Synthesized views of joint 1 rotation based on a disparity image measurement. Left: Small extrapolation, well within the extrapolating range. Middle: Borderline (6 degree) extrapolation, where errors become evident. Right: Too large an extrapolation; the disparity algorithm breaks down.
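The failure mode can be made concrete with a small sketch of disparity-based extrapolation (assuming a dense disparity/flow field has already been estimated between two training views; the flow computation itself, the fragile part, is not shown): a new view is synthesized by scaling the flow and warping the reference image, and once the scaled flow leaves the range over which the estimate is valid the warp produces artifacts of the kind seen in the right frame of fig. 3.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def synthesize_by_flow_scaling(ref_image, flow, alpha):
        """Warp ref_image by alpha times the estimated flow field.

        ref_image: (H, W) grayscale reference view
        flow:      (2, H, W) row/column displacements between two training views
        alpha:     1.0 reproduces the second training view; alpha > 1 extrapolates
        """
        H, W = ref_image.shape
        rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
        # Backward warp: sample the reference at positions displaced by the scaled flow.
        coords = np.stack([rows - alpha * flow[0], cols - alpha * flow[1]])
        return map_coordinates(ref_image, coords, order=1, mode="nearest")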

In fig. 4 we repeat the experiment with the eigen image representation. We use relatively few eigen images (m=24), which causes a significant loss of quality from the start. In this sequence the blurriness is increased in all the images because none of the synthesized poses is close to a pose in the training set (of 110 images) used to compute the eigen images. Note also that the actual motion between the frames is smaller than in fig. 3. This increased ``blurriness'' is characteristic of how the eigen image approach degrades in response to insufficient training data, too few eigen images used for reconstruction, or movements larger than the validity (trust) region of the visual-motor function approximation.

Figure 4: Synthesis based on an eigen image representation.

In the later experiments we use the system with up to a 6 degree joint-space extra- and interpolation range. The interpolation/extrapolation limit is reached first for joint 1, since it causes proportionally larger image motion for a given joint movement than the later joints in the serial kinematic chain. At present we favor the eigen image measure. Despite its lack of ``sharpness'' it solves the task of giving visual feedback understandable to a human. Note that the synthesized images appear less blurry when shown as a real-time movie sequence than as static images on paper.

3.2 Evaluating the estimated visual-motor model of a human arm

 

Figure 5: Simulation of a human arm. Video 1

Human arm data was gathered by repeatedly ``waving'' the arm in front of a camera. For a human we do not have access to the (neural) motor control signals, so the movement was instead parameterized by the camera-space position of a black marker on the hand. This allows us to parameterize human arm movement in the image plane during training, and later to synthesize (animate) arbitrary arm movements in the image plane. Figure 5 shows a frame from the movie sequence Video 1. In the movie, notice that our simulation technique captures not only the two different motions of the upper arm and forearm, but also regenerates the correlated non-rigid deformation (wrinkling) of the shirt. However, the small head movements were not correlated with the marker position, and thus show up as random motions (noise) in the synthesis stage.
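A simple stand-in for the marker-based parameterization is sketched below (the actual tracker used in the experiments is not described here): each training frame is tagged with the image-plane centroid of the dark marker, and these (row, column) coordinates then play the role of the motor parameters during training and synthesis.

    import numpy as np

    def marker_position(frame, threshold=30):
        """Return the (row, col) centroid of the dark marker in a grayscale frame.

        A crude stand-in tracker: pixels darker than 'threshold' are assumed
        to belong to the black marker on the hand.
        """
        mask = frame < threshold
        if not mask.any():
            raise ValueError("marker not found in frame")
        rows, cols = np.nonzero(mask)
        return np.array([rows.mean(), cols.mean()])

    # Training: pair each frame's appearance-space coefficients with the marker
    # position that parameterizes the arm pose in the image plane, e.g.
    #   poses = np.array([marker_position(f) for f in frames])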

To study the model approximation quality we varied the weight vector in (10). As a baseline we use the estimate that includes the linear, quadratic and cubic terms in (10). Including only the linear term yields a more smoothly varying approximation of the visual-motor model and makes the displayed images slightly more blurry, but the desired arm positions are reproduced accurately. Figure 6 illustrates this by showing the synthesis of the same arm pose with three different weight settings. Artificial ``sharpening'' of the image can be achieved by suppressing the linear term, but at the expense of positioning accuracy: in the right frame of fig. 6, notice how the fingers of the hand are geometrically distorted. Also, when comparing the three movies Video 1, Video 2 and Video 3, corresponding to the three weight settings in fig. 6, one can see that while the baseline and the linear-only estimates reproduce straight-line trajectories for the hand, the estimate that suppresses the linear term shows a small but noticeable motion jitter from inaccurate positioning.

Figure 6: Reproduction quality with different visual-motor model estimates. Left: linear through cubic terms; middle: only the linear term; right: mostly quadratic and cubic terms.
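Equation (10) is not reproduced in this excerpt, but the role of the weight vector can be illustrated with a generic weighted polynomial regression from the pose parameters to the appearance-space coefficients. In the sketch below (an illustration of the idea, not the paper's exact estimator) the weights scale a ridge penalty on the linear, quadratic and cubic term groups, so driving a weight towards zero suppresses that group, analogous to the three settings compared in fig. 6.

    import numpy as np
    from itertools import combinations_with_replacement

    def poly_features(x):
        """Constant, linear, quadratic and cubic monomials of the pose vector x."""
        feats, orders = [1.0], [0]
        for order in (1, 2, 3):
            for idx in combinations_with_replacement(range(len(x)), order):
                feats.append(float(np.prod(x[list(idx)])))
                orders.append(order)
        return np.array(feats), np.array(orders)

    def fit_visual_motor_model(poses, coeffs, weights=(1.0, 1.0, 1.0), ridge=1e-3):
        """Fit a pose -> eigen-coefficient map; a small weight heavily penalizes
        (and thus effectively suppresses) that group of terms."""
        A, orders = zip(*(poly_features(x) for x in poses))
        A, orders = np.stack(A), orders[0]
        penalty = np.array([0.0] + [ridge / max(weights[o - 1], 1e-9)
                                    for o in orders[1:]])
        theta = np.linalg.solve(A.T @ A + np.diag(penalty), A.T @ coeffs)
        return theta

    def synthesize_coeffs(theta, x):
        """Predict appearance-space coefficients for a new pose x."""
        return poly_features(x)[0] @ theta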

In previously published research [3] on linear subspace methods for recognition, the eigen images U in (1) look like the objects they represent (hence the terms ``eigen objects'' and ``eigen faces'' in face recognition). In our application we use the subspace method differently: instead of representing different objects or faces in the same location, we represent the same object (a human or robot arm) during different motions. The resulting eigenvectors do not look like human or robot arms, but rather like motion filters, with maximum gradient along the most common normal (edge) motion direction. Figure 7 shows the six most significant eigenvectors.

Figure 7: Six most significant filters (eigen images) learned for the human arm movement.

3.3 On-line linear view synthesis

  For the on-line view synthesis we use a piecewise (time varying) linear model, as described in section 2.3.1, valid around the current system configuration. The model is updated as we move around in the robot parameter space. The main reason for using a linear model is that we want to be able to use it for image synthesis (and possibly control) with as little calibration information as possible; the linear model has fewer parameters to estimate than a higher order model, so a current estimate can be obtained quickly. Fig. 8 shows the on-line case. The real robot is physically in the configuration shown in the lower middle image, and the view synthesis method is used to generate the small deviations from this state shown in the surrounding images.

Figure 8: Using the linear model to synthesize a few small deviations (``twiddles'') from the real physical state shown in the bottom center image. Video
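One way to maintain such a model, sketched below under the assumption that it is close in spirit to section 2.3.1 (which is not reproduced in this excerpt), is a secant (Broyden-style) rank-one update of the local Jacobian between motor space and appearance space; small ``twiddles'' around the current state are then synthesized linearly.

    import numpy as np

    class LocalLinearModel:
        """Time-varying linear model y ~ y0 + J (x - x0) around the current state."""

        def __init__(self, n_motor, m_appearance):
            self.J = np.zeros((m_appearance, n_motor))  # local visual-motor Jacobian
            self.x0 = np.zeros(n_motor)                 # current motor configuration
            self.y0 = np.zeros(m_appearance)            # current appearance coefficients

        def update(self, x, y):
            """Secant (Broyden-style) rank-one update from an observed small move."""
            dx, dy = x - self.x0, y - self.y0
            if dx @ dx > 1e-12:
                self.J += np.outer(dy - self.J @ dx, dx) / (dx @ dx)
            self.x0, self.y0 = x.copy(), y.copy()

        def predict(self, x):
            """Appearance coefficients for a small deviation from the current state."""
            return self.y0 + self.J @ (x - self.x0)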

3.4 An off-line PUMA 760 robot simulator

We have developed a visual simulator for robot arms and identified the visual-motor model of our PUMA 760 robot arm. The simulator combines with Peter Corke's robotics toolbox to form a dynamic and kinematic visual simulation system for robot arms. Our simulator, along with the PUMA 760 model, is available by ftp.

The simulator is based on the eigen image visual front end and the off-line synthesis method. Robot movements are commanded in joint space and, using the toolbox, also in world or tool space. The simulator plans robot trajectories, velocities and accelerations. With the toolbox the user can also calculate and plot the required motor forces. To keep the distribution compact, a 2 DOF robot model based on only 24 eigen images is supplied.
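Conceptually, rendering a commanded motion reduces to mapping each point of the planned joint-space trajectory through the learned visual-motor model and then out of the eigen-image basis; a minimal sketch of such a rendering loop is given below (the names and the callable visual_motor_model are placeholders, not the distributed simulator's interface).

    import numpy as np

    def render_trajectory(joint_trajectory, visual_motor_model, eigen_images,
                          mean_image, shape):
        """Synthesize one frame per commanded joint configuration.

        joint_trajectory:   (T, n) planned joint-space positions, e.g. from a
                            trajectory planner
        visual_motor_model: callable mapping a joint vector to the m
                            appearance-space coefficients (the learned model)
        eigen_images:       (m, N) eigen-image basis; mean_image: (N,) mean image
        """
        frames = []
        for q in joint_trajectory:
            y = visual_motor_model(q)              # joint pose -> appearance coefficients
            img = mean_image + eigen_images.T @ y  # coefficients -> synthesized image
            frames.append(img.reshape(shape))
        return frames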

The user can run the simulator to test and visually follow the execution of robot programs and control algorithms, while plotting instantaneous joint accelerations, velocities and positions. The execution of a motion command is shown in fig. 9 and the visual result in fig. 10 and Video.

 

Figure 9: Instantaneous robot parameters simulated for a move of a PUMA 760

 



Figure 10: 5 out of 30 synthesized views generated from the execution of the same command as in fig. 9. Video

3.5 3 DOF world coordinate driven synthesis

What part of the view synthesis system limits how complex an agent we can simulate? There are two main aspects to scaling the system: how scaling affects the quality of the synthesized views, and how the need for computational resources grows. In our system the limiting factor in scaling to high-DOF agents is the visual front end. In previous work we have successfully estimated the visual-motor transfer function in up to 12 DOF, even with non-rigidly changing agents [11], and used that estimate for control of the agent, which is typically a harder problem than view synthesis. (In view synthesis a poor visual-motor model degrades the quality of the synthesized images, while in control total failure is possible due to oscillatory or divergent behavior of the visual feedback controller.)

With the eigen image visual front end we have run the PUMA robot simulation in up to 3 DOF. For the 3 DOF case we used 1000 training images to form the basis space, and we need to keep a larger number of eigen images than in the previous 2 DOF simulation to obtain reasonable simulation quality. Fig. 11 shows the resulting reproduction quality using a variable number of eigen images from the 3 DOF training set.

Figure 11: Image quality resulting from using between 9 and 299 eigen images to represent a single robot pose in the training set
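One way to see why the 3 DOF case needs a larger m is to look at how much of the training-set variance the leading eigen images capture; a small sketch (the values of m follow fig. 11, but S stands for the singular values of whatever training set is used):

    import numpy as np

    def retained_energy(singular_values, m):
        """Fraction of training-set variance captured by the first m eigen images."""
        s2 = np.asarray(singular_values) ** 2
        return s2[:m].sum() / s2.sum()

    # For a 3 DOF training set the spectrum decays more slowly than in the
    # 2 DOF case, so a given quality level requires a larger m, e.g.
    #   for m in (9, 24, 100, 299):
    #       print(m, retained_energy(S, m))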

For p training images of size N pixels, using m eigen images, and for an n DOF agent, the time complexity of the eigen image calculation is O(p^2 N + p^3 + pmN), where the O(p^2 N) term usually dominates (recall that typically N >> p). The on-line learning of the visual-motor function takes O(nm) time per step, and O(n) steps are needed for a full identification at a particular point. The synthesis step takes O(mN) time. Of these, we have found that in practice the eigen image calculation is by far the most time consuming part. Up to now we have found it feasible to run the robot simulation in up to 3 DOF. The initial computation of the eigen images for a particular agent/robot takes about an hour on a single-processor Sun SPARC, and both the visual-motor model learning and the synthesis parts run at a few frames per second on the same machine.
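The dominant cost arises when the eigen images are computed from the p x p inner-product (Gram) matrix of the training images (the ``snapshot'' approach); the sketch below, again in NumPy rather than the original implementation, shows where the O(p^2 N), O(p^3) and O(pmN) terms come from.

    import numpy as np

    def eigen_images_snapshot(images, m):
        """Compute m eigen images via the snapshot (Gram matrix) method.

        images: (p, N) training images as rows, with N >> p.
        """
        mean = images.mean(axis=0)
        X = images - mean
        G = X @ X.T                                  # (p, p) Gram matrix: O(p^2 N)
        evals, evecs = np.linalg.eigh(G)             # eigendecomposition: O(p^3)
        order = np.argsort(evals)[::-1][:m]          # keep the m largest eigenvalues
        evals, evecs = evals[order], evecs[:, order]
        # Expand the kept eigenvectors back to image size: O(p m N).
        basis = (X.T @ evecs) / np.sqrt(np.maximum(evals, 1e-12))
        return mean, basis.T                         # eigen images as rows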

Fig. 12 shows the simulated image quality in the 3 DOF case. Here we parameterized the eigen image space in robot world coordinates instead of joint coordinates. This makes the sampling of visual poses more even in the training stage, since a world movement command typically moves all joints of the robot, while operation in joint space often moves a single joint at a time; the result is better synthesis quality than in the previously shown joint-frame simulations.

 



Figure 12: Simulation of an articulated PUMA robot, here controlled in 3 DOF world space. Video

