View synthesis is the problem of generating images of a scene corresponding to the values of a set of pose parameters. The common techniques in computer graphics use 3-D CAD models of the scene and of the objects in it to render an image corresponding to a particular viewpoint and/or object pose; the pose parameters are the degrees of freedom built into those 3-D CAD models. Realistic images can be obtained by texture mapping intensity data onto the 3-D CAD wire frame. Creating the models needed for this kind of view synthesis is, however, a tedious, time-consuming, and often manual task. Recently some results have been published on image-based view synthesis, requiring neither a priori models nor calibration, e.g. [5, 6, 7]. Essentially, these methods interpolate, and in some cases also extrapolate, between reference views of a scene taken from different viewpoints. Viewing geometry models (e.g. affine [6] and projective [7]) and visibility constraints are used to ensure the rendering of physically possible scenes.
In this work the objective is to do image-based view synthesis, or simulation, not only rendering views from different viewpoints, but also visually animating the actions of an active articulated agent (e.g. a robot or a human). The method we present takes a list of actions, programmed in their natural motor frames (often the physical joint frame), and renders the images of the corresponding movement into a movie segment. The acquisition of the visual-motor models can be done on-line, which allows us, at any point during a physical execution with the real agent, to run a simulation with the currently learned model. We do not restrict our simulation to any particular class of agents, and can synthesize image sequences of rigid object movements, multiple rigid movements (such as in a serial-link arm), as well as the appearance changes of a non-rigid agent.
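At a high level, the simulation amounts to the loop sketched below. The sketch (Python/NumPy) is only an illustration of this loop; the names `simulate`, `model.predict`, and `model.render`, and the appearance vector `y`, are placeholders rather than a prescribed interface.

```python
import numpy as np

def simulate(actions, model, y0):
    """Render a motor program as an image sequence (illustrative only).

    actions : list of motor commands dq (e.g. joint-space increments)
    model   : learned visual-motor model; predict() maps a motor change to a
              change in the appearance representation, render() maps an
              appearance vector back to an intensity image
    y0      : appearance vector describing the current view
    """
    y = np.asarray(y0, dtype=float).copy()
    frames = []
    for dq in actions:
        y = y + model.predict(np.asarray(dq, dtype=float))  # predicted appearance change
        frames.append(model.render(y))                      # synthesize the frame
    return frames                                           # the movie segment
```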
The goals of our work are perhaps closest to those of Beymer, Shashua and Poggio in [8]. They synthesize (and analyze) different facial expressions, based on a learned model relating image feature movements to expression (pose) parameters, such as degree of smile and viewpoint rotation.
The main differences between Beymer et al.'s work and ours are:
(1) The underlying mechanism of learning is different: they use regularization networks, while we use visual-motor model estimation based on techniques from numerical optimization.
(2) We can simulate the actions of an arbitrary agent; we do not assume knowledge of any particular features on the simulated agent, nor require an operator to pick out special features, whereas Beymer et al. base their view transformation on a few a priori selected features.
(3) We can do the estimation required for the model-free approach both on-line and off-line.
(4) We use a relatively dense representation of the image, and can deal with complex changes in appearance.
Our work draws from previous experience in two fields of computer vision. We combine a visual front end, based on model-free image representation methods, with a visual-motor estimation or ``learning'' method used in numerical optimization [16] and, more recently, in uncalibrated visual servoing [11, 14]. We have experimented with two kinds of visual front ends. The first is the ``subspace'' [1] or ``eigen image'' [3] approach of reducing the dimensionality of the visual input by projecting it onto a different basis. In the second we use a disparity (``stereo'') algorithm to measure image motion. The purpose of the visual-motor model estimation is to learn how control commands to the real (robot or human) agent change the visual appearance, where the appearance changes are represented either as changes in the eigen image subspace or as image motion. The synthesis of new images from agent motor commands is done by first using the visual-motor model estimate to generate a change in our visual representation. The particular representations we use are bidirectional, so that intensity images can both be generated from the representation in the synthesis stage, and described in what we call the appearance vector space in the analysis/learning stage.
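To make this pipeline concrete, the sketch below shows one plausible instantiation of the eigen-image front end, a secant (Broyden-style) rank-one Jacobian update of the kind used in uncalibrated visual servoing, and the synthesis step. The PCA-style basis B, the particular update rule, and all function names are assumptions made for illustration; they are not the only possible realization of the estimation and rendering stages.

```python
import numpy as np

# --- Appearance vector space: "eigen image" (subspace) front end ------------
# Project intensity images onto a basis B (columns = eigen images), obtained
# e.g. by PCA over training views.  The mapping is bidirectional: analyze()
# gives the appearance vector y, synthesize() reconstructs an intensity image.

def analyze(image, B, mean):
    return B.T @ (image.ravel() - mean)          # y = B^T (I - I_mean)

def synthesize(y, B, mean, shape):
    return (mean + B @ y).reshape(shape)         # I ~ I_mean + B y

# --- Visual-motor model: secant (Broyden-style) Jacobian estimation ---------
# J maps a motor change dq to an appearance change dy.  A rank-one secant
# update refines J on-line from observed (dq, dy) pairs.

def broyden_update(J, dq, dy, eps=1e-9):
    err = dy - J @ dq
    return J + np.outer(err, dq) / (dq @ dq + eps)

# --- Synthesis of a new view from a motor command ---------------------------
def predict_image(y, J, dq, B, mean, shape):
    y_new = y + J @ dq                           # predicted appearance change
    return y_new, synthesize(y_new, B, mean, shape)
```

In this picture, the analysis/learning stage turns executed motor commands and the resulting images into (dq, dy) pairs that refine J, while the synthesis stage runs the same mapping in the forward direction to render predicted frames.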
We demonstrate the view synthesis system on two example agents: a human arm and a PUMA robot arm. The system can, however, quite easily be trained to visually simulate any articulated agent. We propose that our system can be used, for instance, for the following tasks: