We have presented an approach to image-based view synthesis that requires no a priori models. Some highlights are: (1) The necessary model estimation can be done quickly on-line, simply by watching the scene and the agent. (2) The synthesis is of an articulated agent and is controlled in the agent's motor space rather than in a visual pose space. The combination of visual and motor space training data is the key to the quick and general model estimation.
In doing this we have traded viewing quality for simplicity of use and speed of model acquisition. We require no a priori assumptions about the scene, the agents, or the viewing geometry, nor any knowledge of the kinematic model of the agent we simulate. The method is useful in applications where real visual feedback is delayed and/or too sparse: the sparse real feedback is used to estimate a visual-motor model, and the estimated model is used to generate dense, immediate visual feedback. The method can also be used offline to synthesize image sequences from a previously learned agent model.
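To make the predictive-display use concrete, the following is a minimal sketch assuming the visual representation has already been reduced to a coefficient vector. The linear map and the class name LinearVisualMotorModel are illustrative placeholders, not the estimator developed in the paper.

\begin{verbatim}
import numpy as np

class LinearVisualMotorModel:
    """Illustrative linear stand-in for an estimated visual-motor model:
    it maps a motor command to a vector of visual coefficients and is
    refit whenever a delayed real observation arrives."""

    def __init__(self):
        self.poses, self.coeffs = [], []
        self.W = None

    def update(self, pose, observed_coeffs):
        # One sparse, delayed training pair (motor command, observed coefficients).
        self.poses.append(np.asarray(pose, dtype=float))
        self.coeffs.append(np.asarray(observed_coeffs, dtype=float))
        X = np.column_stack([np.vstack(self.poses), np.ones(len(self.poses))])
        Y = np.vstack(self.coeffs)
        self.W, *_ = np.linalg.lstsq(X, Y, rcond=None)

    def synthesize(self, pose):
        # Immediate, dense feedback: predict coefficients for any motor
        # command from the current model estimate (requires >= 1 update).
        x = np.append(np.asarray(pose, dtype=float), 1.0)
        return x @ self.W
\end{verbatim}

In this sketch every incoming real frame tightens the motor-to-visual map, while synthesized views are available for every commanded pose without waiting for the delayed feedback.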
Can this kind of view synthesis and animation of an articulated agent be done by table lookup on the stored sequence of training images alone? We do not think direct lookup is a practical approach. As mentioned in section 3.2, a human observer is very sensitive to errors in visual motion. Using only the 110 training images of our 2D robot simulator as reference images and displaying the closest one would produce very uneven visual motion and an unacceptable simulation. In 2D one could sample the reference views more densely, but dense sampling does not scale to articulated agents with three or more degrees of freedom, since the number of reference views required grows exponentially with the number of freedoms. Correct image interpolation is what makes generating smooth movement tractable.
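As a rough illustration of why interpolation matters, the sketch below contrasts nearest-neighbour lookup with a smoothly weighted interpolation in the coefficient space of the visual representation. The Gaussian weighting is an illustrative choice, not the interpolation scheme used in our system.

\begin{verbatim}
import numpy as np

def lookup_nearest(pose, train_poses, train_coeffs):
    """Nearest-neighbour lookup: returns the stored coefficient vector of
    the closest training pose, which jumps between reference views and
    yields uneven visual motion when the training set is sparse."""
    i = np.argmin(np.linalg.norm(train_poses - pose, axis=1))
    return train_coeffs[i]

def interpolate_coeffs(pose, train_poses, train_coeffs, sigma=0.1):
    """Gaussian-weighted interpolation: blends nearby reference views so
    the synthesized coefficients, and hence the rendered motion, vary
    smoothly with the motor command."""
    d2 = np.sum((train_poses - pose) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum()
    return w @ train_coeffs
\end{verbatim}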
The system is currently limited by the performance of the visual front end, which turns the raw intensity image into a parameterized visual representation. We can currently simulate articulated agents (human and robotic) with up to 3 DOF using an ``eigen image'' subspace approach to the visual representation. Given some other visual representation, the visual-motor transfer function estimation formulated in this paper is capable of learning in higher-DOF spaces. To find a better visual front end we plan to experiment with block-motion-based strategies of the kind used in image compression.
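For concreteness, the following is a minimal sketch of such an eigen image front end, assuming a standard principal component analysis over flattened intensity images; the dimensions and function names are illustrative and the front end in our implementation may differ in detail.

\begin{verbatim}
import numpy as np

def fit_eigenimages(images, k=10):
    """PCA on flattened grayscale training images (n_images, n_pixels):
    returns the mean image and the top-k eigenimages."""
    mean = images.mean(axis=0)
    # Rows of vt are the principal directions, i.e. the eigenimages.
    _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, vt[:k]

def project(image, mean, eigenimages):
    """Parameterize an intensity image as its k eigenimage coefficients."""
    return eigenimages @ (image - mean)

def reconstruct(coeffs, mean, eigenimages):
    """Synthesize an intensity image from a coefficient vector."""
    return mean + eigenimages.T @ coeffs
\end{verbatim}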