Our method has two stages: learning and image sequence synthesis.
In the learning stage the problem we solve is: given a (time) sequence of intensity images $I_1, \ldots, I_k$ and the corresponding sequence of motor states $x_1, \ldots, x_k$ of an articulated agent, find an approximate invertible mapping $F$ between image space and motor space, so that (for the training set) $I_t \approx F(x_t)$. To make the problem tractable we divide $F$ into two parts, $I = F_2(y)$ and $y = F_1(x)$, where $y$ is in a space of appearance vectors. We will show how to estimate (learn) these functions from watching a moving articulated agent.
For images of size
,
, and an agent
with n freedoms (
) we choose the appearance vector
space
so that a significant dimensionality reduction
is achieved. For example in the two joint robot arm simulation in Section
6.4,
, m = 24, n = 2.
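As a concrete illustration of this reduction, the sketch below projects flattened $p$-pixel images onto an $m$-dimensional subspace obtained by PCA. This is only one plausible instance of a subspace method; the disparity and subspace methods actually used are described in Section 4.1, and the function names here are illustrative.

```python
# Minimal sketch, assuming a PCA-style subspace; the paper's own
# appearance-vector methods are those of Section 4.1.
import numpy as np

def fit_appearance_subspace(images, m):
    """images: (k, p) array of k flattened training images -> (mean, basis)."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Thin SVD; the first m right singular vectors span the appearance subspace.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:m]                    # basis: (m, p)

def to_appearance(image, mean, basis):
    """Project one flattened p-pixel image onto the m-dimensional appearance space."""
    return basis @ (image - mean)          # y in R^m

def from_appearance(y, mean, basis):
    """Approximate image reconstruction from an appearance vector (the role of F_2)."""
    return mean + basis.T @ y
```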
Figure 6.1 outlines our method. During training, sequences of intensity images are transformed into sequences of appearance vectors by either a disparity or a subspace method, described in Section 4.1. The appearance vectors and the corresponding motor vectors are used to estimate the visual-motor model $F_1$, either on-line while the training images are acquired (Section 4.2) or off-line from a batch set of images (Section 6.2.2).
Figure 6.1: Overview of appearance based view synthesis.
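A minimal sketch of the estimation and synthesis steps is given below, assuming an affine visual-motor model fit by batch least squares; the on-line estimator of Section 4.2 and the batch procedure of Section 6.2.2 are not reproduced here, and the affine form and all names are illustrative only.

```python
# Minimal sketch: fit an affine visual-motor model y ~ A x + b from the
# training appearance and motor vectors, then synthesize an image for a
# new motor state using the subspace from the previous sketch.
import numpy as np

def fit_visual_motor_model(X, Y):
    """X: (k, n) motor states, Y: (k, m) appearance vectors -> (A, b)."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])   # homogeneous coordinates
    W, *_ = np.linalg.lstsq(Xh, Y, rcond=None)      # W: (n+1, m)
    return W[:-1].T, W[-1]                          # A: (m, n), b: (m,)

def synthesize_image(x_new, A, b, mean, basis):
    """Predict the appearance vector for motor state x_new and map it to an image."""
    y_pred = A @ x_new + b                          # visual-motor model (F_1)
    return mean + basis.T @ y_pred                  # image synthesis (F_2)
```

In the synthesis stage, sweeping the motor state along a desired trajectory and synthesizing an image at each step then yields the synthesized image sequence.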