
6 Appearance Based View Synthesis and Animation

View synthesis is the problem of generating images of a scene corresponding to the values of a set of pose parameters. The common techniques in computer graphics use 3-D CAD models of the scene and of the objects in it to render an image for a particular viewpoint and/or object pose; the pose parameters are the degrees of freedom built into those 3-D CAD models. Realistic images can be obtained by texture mapping intensity data onto the 3-D CAD wire frame. Creating the models needed for this kind of view synthesis is, however, a tedious, time-consuming, and often manual task. Recently, results have been published on image-based view synthesis requiring neither a priori models nor calibration, e.g. [Werner et al., 1995, Seitz and Dyer, 1995, Laveau and Faugeras, 1994]. Essentially, these methods perform image interpolation, and in some cases extrapolation, between reference views of a scene taken from different viewpoints. Viewing geometry models (e.g. affine [Seitz and Dyer, 1995] and projective [Laveau and Faugeras, 1994]) and visibility constraints are used to ensure the rendering of physically possible scenes.
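To make the idea of view interpolation concrete, the following is a minimal sketch, not taken from any of the cited papers, of rendering an in-between view from two reference images and a dense correspondence field. The array shapes, the forward-warping scheme, and the omission of visibility handling are illustrative assumptions.

import numpy as np

def interpolate_view(img0, img1, flow, alpha):
    """Render a view a fraction alpha of the way from img0 to img1.
    img0, img1 : (H, W) grayscale reference images
    flow       : (H, W, 2) displacements (dx, dy) mapping img0 pixels into img1
    alpha      : 0.0 reproduces img0, 1.0 approximates img1
    """
    H, W = img0.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Full correspondence in img1 for each pixel of img0.
    x1 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    y1 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    # Interpolated position: move each pixel a fraction alpha along its
    # correspondence vector.
    xi = np.clip(np.round(xs + alpha * flow[..., 0]).astype(int), 0, W - 1)
    yi = np.clip(np.round(ys + alpha * flow[..., 1]).astype(int), 0, H - 1)
    out = np.zeros(img0.shape, dtype=float)
    # Forward-warp with intensity blending; holes and occlusions are ignored
    # in this sketch (the cited methods handle visibility explicitly).
    out[yi, xi] = (1.0 - alpha) * img0[ys, xs] + alpha * img1[y1, x1]
    return out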

Our objective is to perform image-based view synthesis, or simulation, not only rendering views from different viewpoints but also visually animating the actions of an active articulated agent (e.g. a robot or a human). The method we present takes a list of actions, programmed in their natural motor frames (often the physical joint frame), and renders the images of the corresponding movement into a movie segment. The acquisition of the visual-motor models can be done on-line. This allows us, at any point during a physical execution with the real agent, to perform simulation with the currently learned model. We do not restrict our simulation to any particular class of agents: we can synthesize image sequences involving rigid object movements and multiple rigid movements (such as in a serial-link arm), as well as the appearance changes of a non-rigid agent.
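As a rough illustration of this pipeline, the sketch below turns a list of motor commands into a sequence of synthesized frames using a previously estimated visual-motor model. The function names predict_appearance_change and render_image are hypothetical placeholders for the estimation and synthesis steps developed later in this chapter, not an interface defined in the thesis.

import numpy as np

def animate(motor_commands, y0, predict_appearance_change, render_image):
    """motor_commands : list of joint-space increments (1-D arrays)
       y0             : initial appearance vector of the scene
       returns        : list of synthesized intensity images"""
    frames = []
    y = y0.copy()
    for dx in motor_commands:
        # Visual-motor model: map a motor increment to a change in the
        # appearance vector (eigen-image coefficients or image motion).
        y = y + predict_appearance_change(dx)
        # Synthesis stage: back-project the appearance vector to an image.
        frames.append(render_image(y))
    return frames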

The goals of our work are perhaps closest to those of Beymer, Shashua and Poggio [Beymer et al., 1993]. They synthesize (and analyze) different facial expressions based on a learned model representing a mapping between image feature movements and expression (pose) parameters, such as degree of smile and viewpoint rotation. The main differences between Beymer et al.'s work and ours are: (1) The underlying learning mechanism is different. They use regularization networks, whereas we use visual-motor model estimation based on techniques from numerical optimization. (2) We can simulate the actions of an arbitrary agent and do not assume knowledge of any particular features on the simulated agent, nor do we require an operator to pick out special features; Beymer et al. base their view transformation on a few a priori selected features. (3) The estimation required for our model-free approach can be done both on-line and off-line. (4) We use a relatively dense representation of the image and can deal with complex changes in appearance.

Our work draws on previous experience in two fields of computer vision. We combine a visual front end, based on model-free image representation methods, with a visual-motor estimation or ``learning'' method used in optimization [Fletcher, 1987] and, more recently, in uncalibrated visual servoing (Chapter 5 and [Jägersand and Nelson, 1994, Jägersand and Nelson, 1995]). We have experimented with two kinds of visual front ends. The first is the ``subspace'' [Nayar et al., 1995] or ``eigen-image'' [Turk and Pentland, 1991] approach, which reduces the dimensionality of the visual input by projection onto a different basis. In the second we use a disparity (``stereo'') algorithm to measure image motion. The purpose of the visual-motor model estimation is to learn how control commands to the real (robot or human) agent change the visual appearance, where the appearance changes are represented either as changes in the eigen-image subspace or as image motion. New images are synthesized from agent motor commands by first using the visual-motor model estimate to generate a change in our visual representation. The particular representations we use are bidirectional, so that intensity images can both be generated from the representation in the synthesis stage and described in what we call appearance vector-space in the analysis and learning stage.
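The sketch below illustrates, under simplifying assumptions, the bidirectional eigen-image representation (analysis into appearance vector-space and synthesis back to intensity images) together with a batch least-squares fit of a linear visual-motor model. The on-line estimation actually used (Chapter 5) and the disparity-based front end are not reproduced here; the class and function names are illustrative only.

import numpy as np

class EigenAppearance:
    def __init__(self, images, k):
        # images: (N, H*W) training images as rows; keep k eigen-images.
        self.mean = images.mean(axis=0)
        U, S, Vt = np.linalg.svd(images - self.mean, full_matrices=False)
        self.basis = Vt[:k]                      # (k, H*W) eigen-images

    def analyze(self, image):
        # Analysis stage: project an intensity image into appearance
        # vector-space (coefficients in the eigen-image subspace).
        return self.basis @ (image - self.mean)

    def synthesize(self, y):
        # Synthesis stage: reconstruct an intensity image from its
        # appearance-space coordinates.
        return self.mean + self.basis.T @ y

def fit_visual_motor_model(motor_increments, appearance_increments):
    # Linear model dy ~= J dx fitted by least squares from corresponding
    # rows of training increments; rows are (dof,) and (k,) respectively.
    X, *_ = np.linalg.lstsq(motor_increments, appearance_increments, rcond=None)
    return X.T   # (k, dof) estimated visual-motor Jacobian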

We demonstrate the view synthesis system on two example agents: a human arm and a PUMA robot arm. The system can, however, quite easily be trained to visually simulate any articulated agent. We propose that our system can be used, for instance, for the following tasks:



