We have presented an approach to image-based view synthesis that requires no a priori models. Some highlights are: (1) The synthesis is of an articulated (robot or human) agent, and synthesized movements are programmed in the agent's motor space, just as for a real robot, rather than in a visual pose space. (2) The necessary model estimation is done automatically on-line, simply by watching the scene and the agent. The combination of visual and motor-space training data is the key to quick and general model estimation.
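For concreteness, the fragment below is a minimal sketch of what such a visual-motor model can look like: a smooth map from motor coordinates to the low-dimensional visual coordinates produced by the front end, fitted from paired training samples and then queried at arbitrary commanded poses. The polynomial form, array names, and dimensions are illustrative assumptions, not the exact estimator used in the paper.

\begin{verbatim}
import numpy as np

# Illustrative stand-ins, not the paper's data: Q holds motor-space
# coordinates of the training poses, C the corresponding low-dimensional
# visual coordinates produced by the visual front end.
rng = np.random.default_rng(0)
Q = rng.uniform(-1.0, 1.0, size=(110, 2))   # 110 training poses, 2 motor freedoms
C = rng.normal(size=(110, 10))              # 10 visual coordinates per pose

# One simple choice of visual-motor model: a low-order polynomial map from
# motor space to visual space, fitted by least squares on the paired data.
def features(q):
    q = np.atleast_2d(q)
    return np.column_stack([np.ones(len(q)), q, q ** 2, q[:, :1] * q[:, 1:]])

W, *_ = np.linalg.lstsq(features(Q), C, rcond=None)

def motor_to_visual(q):
    """Predict visual coordinates for an arbitrary motor command q."""
    return features(q) @ W
\end{verbatim}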
In doing this we have traded viewing quality for simplicity of use, generality, and speed of model acquisition. We require no a priori assumptions about the scene, agents, or viewing geometry, nor any knowledge of the kinematic model of the agent we simulate. The method is useful in applications where real visual feedback is delayed or too sparse: the sparse real feedback is used to estimate a visual-motor model, and the estimated model is then used to generate dense, immediate visual feedback. It can also be used off-line to synthesize image sequences from a previously learned agent model.
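Read this way, the method acts as a predictive display: at every control step the learned model supplies an immediate synthetic view, while the sparse or delayed real frames, as they arrive, are used to re-estimate the model. The loop below is a hypothetical sketch of that idea, reusing motor_to_visual from the previous fragment; all other callables are placeholder names, not components of our system.

\begin{verbatim}
def predictive_display(commands, get_real_frame, render, refine_model, display):
    """Hypothetical loop: dense synthetic feedback between sparse real frames.

    commands       -- iterable of motor commands q
    get_real_frame -- returns a (q, image) pair when real feedback arrives, else None
    render         -- maps predicted visual coordinates back to an image
    refine_model   -- re-estimates the visual-motor model from a new sample
    display        -- shows a frame to the operator
    """
    for q in commands:
        display(render(motor_to_visual(q)))   # immediate, synthesized view
        real = get_real_frame()
        if real is not None:                  # real view arrives late or sparsely
            refine_model(*real)
\end{verbatim}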
A natural question is: can this kind of view synthesis and animation of an articulated agent be done by table lookup, using just the stored sequence of training images? We do not think direct lookup is a practical approach. As mentioned in Section 6.5, a human observer is very sensitive to errors in visual motion. For example, using only the 110 training images of our 2D robot simulator as reference views and displaying the closest one for each commanded pose would give very uneven visual motion and an unacceptable simulation. In 2D one could sample the reference views more densely, but denser sampling does not scale well for articulated agents with 3 or more degrees of freedom. Correct image interpolation is necessary to make the problem of generating smooth movement tractable.
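The scaling argument can be made concrete with a back-of-the-envelope count: if the 110 training images correspond to roughly ten samples per axis of the 2D pose space, then keeping the same per-axis density in d dimensions requires on the order of 10^d stored views. The figures below are this rough estimate only, not measurements.

\begin{verbatim}
# Stored views needed for direct lookup at a fixed per-axis sampling
# density (about 10 samples per axis, roughly consistent with the
# 110 reference images of the 2D case).
samples_per_axis = 10
for dof in (2, 3, 4, 6):
    print(f"{dof} freedoms: ~{samples_per_axis ** dof} reference views")
\end{verbatim}

Already at three freedoms the required number of reference views grows by an order of magnitude, which is why interpolation, rather than denser sampling, is the tractable route to smooth motion.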
The system is currently limited by the performance of the visual front end, in which the raw intensity image is turned into a parameterized visual representation. We can currently simulate articulated agents (human and robotic) with up to 3 DOF using an ``eigen-image'' subspace approach to visual representation. Given some other visual representation, the visual-motor model estimation as formulated in this paper is capable of learning in higher-DOF spaces. To find a better visual front end, we plan to experiment with block-motion-based strategies such as those used in image compression.
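For readers unfamiliar with the eigen-image idea, the fragment below sketches the standard PCA construction we have in mind: training images are vectorized, a low-dimensional subspace is found by SVD, and each image is represented by its projection coefficients. The image size and the number of retained components are arbitrary illustrative choices.

\begin{verbatim}
import numpy as np

# Stand-in training images; real data would be frames of the observed agent.
rng = np.random.default_rng(0)
images = rng.uniform(0.0, 1.0, size=(110, 64, 64))

X = images.reshape(len(images), -1)          # vectorize: one row per image
mean_image = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean_image, full_matrices=False)
k = 10
eigen_images = Vt[:k]                        # top-k "eigen-images"

def to_visual_coords(image):
    """Project an image into the k-dimensional eigen-image subspace."""
    return (image.reshape(-1) - mean_image) @ eigen_images.T

def to_image(coords, shape=(64, 64)):
    """Reconstruct an approximate image from its subspace coordinates."""
    return (mean_image + coords @ eigen_images).reshape(shape)
\end{verbatim}

These projection coefficients are the visual coordinates that the visual-motor model maps motor commands onto; the reconstruction step turns a predicted coefficient vector back into a displayable frame.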