We describe view synthesis in the off-line and on-line case. The off-line
case is: given a motor command sequence $\{\mathbf{x}_k\}$ and a
previously learned visual-motor model $f$ of an agent,
generate a movie of the agent performing the task.
The on-line case is to generate arbitrary
simulated views, representing (reasonably small) deviations
from
the current state of the real physical agent, while that same
agent is executing some task and while
learning and refining the model used to generate the synthesized views.
In the on-line case we are currently observing, and possibly
controlling, the physical agent we want to be able to simulate.
Assume that observation of the visual-motor system has been going on for at
least $n$ linearly independent moves, where $n$ is the number of controlled
degrees of freedom (DOFs) of the agent. Then, through observation of the
process by the method in Section 4.2, we have an estimate $\hat{J}$ of the
current visual-motor Jacobian.
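The Section 4.2 estimator itself is not part of this excerpt; as an illustration of why $n$ linearly independent moves suffice, the following least-squares sketch recovers $\hat{J}$ from $n$ exploratory moves (function and variable names are our own, not the paper's):

```python
import numpy as np

def estimate_jacobian(dx_moves, dy_changes):
    """Least-squares estimate of the visual-motor Jacobian J such that
    dy = J @ dx, from n linearly independent exploratory moves.

    dx_moves   : (n, n_dof)  motor-space displacements
    dy_changes : (n, n_feat) corresponding appearance-vector changes
    """
    dX = np.asarray(dx_moves, dtype=float)   # rows: motor displacements
    dY = np.asarray(dy_changes, dtype=float) # rows: appearance changes
    # Solve dX @ J.T = dY for J.T; exact when the n moves are
    # linearly independent, least-squares otherwise.
    Jt, *_ = np.linalg.lstsq(dX, dY, rcond=None)
    return Jt.T                              # shape (n_feat, n_dof)
```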
Consider one step in an on-line algorithm.
At time $k$ we have the current image $I_k$, appearance vector
$\mathbf{y}_k$, estimated visual-motor Jacobian $\hat{J}_k$, and current
agent state $\mathbf{x}_k$ in motor space. Around this state the visual
appearance change caused by a hypothetical movement $\Delta\mathbf{x}$ can
be predicted by:
$$\hat{\mathbf{y}} = \mathbf{y}_k + \hat{J}_k\,\Delta\mathbf{x} \qquad (22)$$
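Read concretely, Eq. (22) is a single matrix-vector product. A minimal numpy sketch (names hypothetical):

```python
import numpy as np

def predict_appearance(y_k, J_k, dx):
    """First-order prediction of the appearance vector after a
    hypothetical motor move dx about the current state, Eq. (22)."""
    return y_k + J_k @ dx
```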
Consider the algorithm in Table 6.1. This system description maps naturally onto a telemanipulation application, where a tele-operator controls a remote robot, but long delays or limited bandwidth between the tele-operation site and the agent may prevent immediate or full frame-rate visual feedback to the operator. Instead we use the view synthesis method to generate the immediate visual feedback, and use the real visual feedback to calibrate the model used for the view synthesis. Note that for simplicity the synthesis and estimation processes are described as synchronized here, but they can be asynchronous.
Table 6.1: Algorithm for on-line view synthesis
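The body of Table 6.1 did not survive in this excerpt; the following is a minimal sketch of the loop it describes, under the assumptions stated above: predict and render a view for every motor command immediately, and recalibrate the model when the delayed real measurement arrives. All names and the callback interface are hypothetical.

```python
import numpy as np
from collections import deque

def online_view_synthesis(commands, render, measure_real, update_model,
                          x0, y0, J0, delay):
    """Sketch of the on-line loop of Table 6.1: synthesize a view for
    every command immediately; when the real, delayed measurement
    arrives, hand it to `update_model` to refine the Jacobian."""
    x = np.asarray(x0, dtype=float)
    y = np.asarray(y0, dtype=float)
    J = np.asarray(J0, dtype=float)
    pending = deque()                  # motor states awaiting real feedback
    for k, dx in enumerate(commands):
        y = y + J @ dx                 # predicted appearance, Eq. (22)
        x = x + dx
        render(y)                      # immediate synthesized feedback
        pending.append(x.copy())
        if k >= delay:                 # real features from d steps ago arrive
            x_old = pending.popleft()
            y_real = measure_real(k - delay)
            J = update_model(J, x_old, y_real)
    return x, y, J
```

The `update_model` callback stands in for the estimator of Section 4.2; one plausible instance is sketched after the next paragraph.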
After some delay $d$, and possibly at a lower rate than full frame
rate, the real image $I_k$ arrives. From it the real measured feature vector
$\mathbf{y}_k$ is extracted and used to recalibrate the model.
The on-line method thus estimates, and uses, successive piecewise linear
models of the visual-motor function, each estimated model valid around a
particular physical motor state $\mathbf{x}_k$.
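The excerpt defers the actual estimator to Section 4.2. One plausible way to maintain such successive piecewise linear models is a Broyden-style secant update, sketched below (class name and interface are hypothetical, chosen to match the loop above):

```python
import numpy as np

class BroydenUpdater:
    """Broyden-style secant update of the Jacobian from successive real
    measurements; a plausible stand-in for the Section 4.2 estimator."""
    def __init__(self):
        self.prev = None                   # last real (x, y) pair seen

    def __call__(self, J, x, y_real, eps=1e-9):
        x = np.asarray(x, dtype=float)
        y_real = np.asarray(y_real, dtype=float)
        if self.prev is not None:
            x0, y0 = self.prev
            dx, dy = x - x0, y_real - y0
            denom = float(dx @ dx)
            if denom > eps:                # skip degenerate (tiny) moves
                J = J + np.outer(dy - J @ dx, dx) / denom
        self.prev = (x.copy(), y_real.copy())
        return J
```

An instance of `BroydenUpdater` can be passed as the `update_model` argument of `online_view_synthesis` above.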
How long a delay $d$ we can tolerate depends on the validity range of the
linear model in Eq. (5.4), which in turn depends on the visual-motor model
of our system and on the visual measures we choose.
In the off-line case we have a data set of images $\{I_j\}$ and a set of
the corresponding agent control vectors $\{\mathbf{x}_j\}$. If the data are
sampled relatively densely, ordering the set so that consecutive control
vectors lie close in motor space would allow us to use the on-line method
to estimate the visual-motor model $f$. However, we assume more processing
time is available in the off-line case, and one of several well-known
function approximation techniques can be used. We show experiments with
(cubic) spline interpolation and weighted inverse distance interpolation
(Eq. 6.2). The latter is interesting in that by manipulating the weight
vector we can vary the ``smoothness'' of the estimate and, as shown in
Section 6.5, trade positioning accuracy for image sharpness.
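Eq. 6.2 itself is not shown in this excerpt, so the weighting below, a Shepard-style power-law kernel with exponent `p` playing the role of the smoothness control, is an assumption rather than the paper's exact formula:

```python
import numpy as np

def idw_interpolate(x_query, X_train, Y_train, p=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate of the appearance vector at
    motor state x_query from training pairs (X_train, Y_train).

    Small p averages broadly (smoother, blurrier estimates); large p
    follows the nearest training samples more closely.
    """
    d = np.linalg.norm(X_train - x_query, axis=1)
    if np.any(d < eps):                 # exact hit on a training sample
        return Y_train[np.argmin(d)]
    w = 1.0 / d**p
    return (w[:, None] * Y_train).sum(axis=0) / w.sum()
```

Varying `p` mirrors the trade-off the text cites from Section 6.5: sharper, more position-accurate views near training samples versus smoother interpolation between them.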
The batch part of the off-line view synthesis consists of estimating the visual-motor model $\hat{f}$ from the training images and control vectors, using one of the function approximation techniques above.
During use, desired action sequences $\{\mathbf{x}^*_k\}$ are supplied,
for instance by a tele-operator interacting with the ``virtual''
agent. They are made into images in a straightforward way: each desired
state is passed through the estimated model $\hat{f}$ to obtain the
corresponding predicted appearance, from which the image is rendered.
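A sketch of this mapping, with `f_hat` standing for the estimated model (e.g. built from `idw_interpolate` above) and `render` a hypothetical appearance-to-image renderer:

```python
def synthesize_movie(x_desired, f_hat, render):
    """Batch off-line synthesis: map each desired motor state through
    the learned visual-motor model and render the predicted appearance
    vector into an image frame."""
    return [render(f_hat(x)) for x in x_desired]
```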
Note that the task in the training data set
and the motor space description of the desired simulated actions
need not be related in any way, other than that the desired simulated
actions should be from (a subset of) the workspace explored in the
training. That is, the robot could have been doing a task in the training
stage other than the one being simulated, and the training data does not
need to contain the exact poses of the simulation stage, only reasonably
close ones.