6.8 3 DOF World Coordinate Driven Synthesis

What part of the view synthesis system limits how complex an agent we can simulate? Scaling the system has two main aspects: how scaling affects the quality of the synthesized views, and how the demand for computational resources grows. In our system, the limiting factor in scaling to high-DOF agents is the visual front end. In previous work we successfully estimated the visual-motor transfer function in up to 12 DOF, even with non-rigidly changing agents [Jägersand and Nelson, 1994], and used that estimate for control of the agent, which is typically a harder problem than view synthesis. (In view synthesis a poor visual-motor model only degrades the quality of the synthesized images, while in control a total failure is possible due to oscillatory or divergent behavior of the visual feedback controller.)
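
To make the online visual-motor model estimation concrete, the following is a minimal sketch of a Broyden-style secant update of the visual-motor Jacobian, the kind of estimator used for this purpose; the function name and interface are ours for illustration, not the exact implementation of the original system:

    import numpy as np

    def broyden_update(J, dx, dy, eps=1e-9):
        """One secant (Broyden) correction of the visual-motor Jacobian J.

        J  : current m x n Jacobian estimate (illustrative naming)
        dx : commanded change in the n motor DOFs (length n)
        dy : observed change in the m visual coefficients (length m)
        The estimate is only corrected along the direction actually
        explored, so on the order of n well-chosen moves identify J
        around a given work point.
        """
        denom = float(dx @ dx)
        if denom > eps:
            J = J + np.outer(dy - J @ dx, dx) / denom
        return J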

With the eigen-image visual front end we have run the PUMA robot simulation in up to 3 DOF. For the 3 DOF case we used 1000 training images to form the basis space, and to get reasonable simulation quality we need to keep a larger number of eigen-images m than in the previous 2 DOF simulation. Fig. 6.10 shows the resulting reproduction quality using a variable number of eigen-images from the 3 DOF training set.
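
As an illustration of how such a basis space can be formed, here is a minimal eigen-image computation via a thin SVD; this is a sketch under our own naming, not necessarily how the actual front end is coded:

    import numpy as np

    def eigen_image_basis(images, m):
        """Form an m-dimensional eigen-image basis from training images.

        images : p x N array, one flattened image per row
        Returns the mean image and the first m eigen-images (rows),
        ordered by decreasing captured variance.
        """
        mean = images.mean(axis=0)
        X = images - mean
        _, _, Vt = np.linalg.svd(X, full_matrices=False)  # thin SVD
        return mean, Vt[:m]

    # A view is coded by m coefficients c = B @ (img - mean) and
    # approximately reconstructed as mean + B.T @ c.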

Figure 6.10: Image quality resulting from using between 9 and 299 eigen-images to represent a single robot pose in the training set.

For p images of N pixels each in the training set, using m eigen-images, and for an n DOF agent, the time complexity of the eigen-image calculation is O(p^2 N + p^3 + m p N), where O(p^2 N) is usually the dominating term (recall that typically N >> p). The online learning of the visual-motor function takes O(nm) time per step, and O(n) steps are needed for a full identification at a particular point. The synthesis step takes O(mN) time. Of these, we have found that in practice the eigen-image calculation is by far the most time-consuming part. Up to now we have found it feasible to do the robot simulation in up to 3 DOF. The initial computation of the eigen-images for the particular agent (robot or human) takes about an hour on a single-processor SUN Sparc, and both the visual-motor model learning and the synthesis parts run at a few frames per second on the same machine.
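
The terms above correspond to computing the eigen-images from the p x p Gram matrix rather than the full N x N image covariance; a sketch of this "method of snapshots" formulation follows (illustrative code, not from the thesis):

    import numpy as np

    def eigen_images_snapshot(X, m):
        """Eigen-images via the p x p Gram matrix (method of snapshots).

        X : p x N array of mean-subtracted training images, N >> p.
        Forming G costs O(p^2 N), its eigendecomposition O(p^3), and
        lifting the top m eigenvectors back to image space O(m p N),
        matching the complexity quoted above.
        """
        G = X @ X.T                      # p x p Gram matrix
        w, V = np.linalg.eigh(G)         # eigenvalues in ascending order
        top = np.argsort(w)[::-1][:m]    # indices of the m largest
        U = X.T @ V[:, top]              # N x m, eigen-images as columns
        U /= np.linalg.norm(U, axis=0)   # normalize each eigen-image
        return U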

Fig. 6.11 shows the simulated image quality in the 3 DOF case. Here we parameterized the eigen-image space in robot world coordinates instead of joint coordinates. This makes the sampling of visual poses more even in the training stage, since a world movement command typically moves all the joints of the robot, while operation in joint space often moves a single joint at a time. The result is a better synthesis quality than in the previously shown joint-frame simulations.
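
A sketch of the corresponding training-pose sampling in world coordinates is given below; the workspace bounds are placeholders, not values from the thesis:

    import numpy as np

    def sample_world_poses(p, lo, hi, seed=0):
        """Draw p training poses uniformly over a cubic workspace.

        lo, hi : 3-vectors bounding the workspace in world (x, y, z);
        hypothetical bounds for illustration only.
        A world-space move typically engages several joints at once,
        so the visual poses are spread more evenly than when stepping
        one joint at a time in joint space.
        """
        rng = np.random.default_rng(seed)
        return rng.uniform(lo, hi, size=(p, 3))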

Figure 6.11: Simulation of an articulated PUMA robot, here controlled in 3 DOF world space in a cubic workspace. (Video 6)

