
6.2 Synthesizing Images

We describe view synthesis in the off-line and the on-line case. The off-line case is: given a motor command sequence $\{x^*_k\}$ and a previously learned visual-motor model $\hat{f}$ of an agent, generate a movie of the agent performing the task. The on-line case is to generate arbitrary simulated views, representing (reasonably small) deviations $\Delta x$ from the current state of the real physical agent, while that same agent is executing some task and while the model used to generate the synthesized views is still being learned and refined.

6.2.1 On-line View Synthesis

  In the on-line case we are currently observing, and possibly controlling, the physical agent we want to be able to simulate. Assume that observation of the visual-motor system has been going on for at least $n$ linearly independent moves $\Delta x$, where $n$ is the number of controlled DOF's of the agent. Then, through observation of the process by the method in Section 4.2, we have an estimate of the current visual-motor Jacobian $\hat{J}$. Consider one step in an on-line algorithm. At time $k$ we have the current image $I_k$, appearance vector $y_k$, estimated visual-motor Jacobian $\hat{J}_k$, and current agent state $x_k$ in motor space. Around this state the visual appearance change caused by a hypothetical movement $\Delta x$ can be predicted by:

(22)   $\hat{y} = y_k + \hat{J}_k \Delta x$
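For concreteness, a minimal sketch of this prediction step in Python/NumPy; the array names are illustrative and not taken from the thesis:

import numpy as np

def predict_appearance(y_k, J_k, dx):
    # Eq. (22): first-order prediction of the appearance vector after a
    # hypothetical motor move dx about the current state.
    return y_k + J_k @ dx

# e.g. 2 motor DOF's and a 4-dimensional appearance vector:
y_k = np.zeros(4)
J_k = np.ones((4, 2))
y_hat = predict_appearance(y_k, J_k, np.array([0.01, -0.02]))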

Consider the algorithm in Table 6.1. The system is easily cast as a telemanipulation application, where the tele-operator controls a remote robot, but long delays or limited bandwidth between the tele-operation site and the agent may prevent immediate or full frame-rate visual feedback to the operator. Instead we use the view synthesis method to generate the immediate visual feedback, and use the real visual feedback to calibrate the model used for the view synthesis. Note that, for simplicity, the synthesis and estimation processes are described as synchronized here, but they can be asynchronous.

Table 6.1: Algorithm for on-line view synthesis

After some delay $d$, and possibly at a lower rate than full frame rate, the real image arrives. From it the real measured feature vector $y$ is extracted and used to recalibrate the model. The on-line method thus estimates, and uses, successive piecewise linear models of the visual-motor function, each estimated model valid around a particular physical motor state $x_k$. How long a delay $d$ we can tolerate depends on the validity range for our linear model (5.4), which in turn depends on the visual-motor model of our system and on the visual measures we choose.
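One synchronized step of this loop can be sketched as follows. This is an illustrative Python outline, not the algorithm of Table 6.1 verbatim: get_real_features is a hypothetical helper returning the measured appearance vector, and a Broyden-style secant update stands in for the estimator of Section 4.2.

import numpy as np

def online_step(x, y, J, dx_cmd, dx_hyp, get_real_features):
    # 1. Immediate feedback: synthesize the appearance for a hypothetical
    #    move dx_hyp (Eq. 22); an image is then rendered from y_synth
    #    using the appearance representation.
    y_synth = y + J @ dx_hyp

    # 2. When the real image for the executed move dx_cmd arrives, measure
    #    the new appearance vector and refine the local linear model
    #    (secant update, standing in for the method of Section 4.2).
    y_new = get_real_features()
    dy = y_new - y
    J = J + np.outer(dy - J @ dx_cmd, dx_cmd) / (dx_cmd @ dx_cmd)
    return x + dx_cmd, y_new, J, y_synth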

6.2.2 Off-line View Synthesis

  In the off-line case we have a data set of images $\{I_k\}$ and a set of the corresponding agent control vectors $\{x_k\}$. If the data are sampled relatively densely, ordering the set so that consecutive states $x_k$ and $x_{k+1}$ are close allows us to use the on-line method to estimate the visual-motor model $f$. However, we assume more processing time is available in the off-line case, and one of several well-known function approximation techniques can be used. We show experiments with (cubic) spline interpolation and with weighted inverse distance interpolation (Eq. 6.2). The latter is interesting in that by manipulating the weight vector $w$ we can vary the ``smoothness'' of the estimate and, as shown in Section 6.5, trade positioning accuracy for image sharpness.

(23)   $\hat{f}(x) = \frac{\sum_k y_k / \|x - x_k\|_w}{\sum_k 1 / \|x - x_k\|_w}$,   where $\|v\|_w^2 = \sum_j w_j v_j^2$
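A minimal sketch of this interpolation in Python, assuming Eq. 6.2 takes the common Shepard (inverse-distance) form above, with the weight vector w scaling the motor-space distance and a small epsilon guarding against division by zero at the training samples:

import numpy as np

def interpolate_appearance(x, X_train, Y_train, w, eps=1e-9):
    # Weighted inverse distance interpolation of appearance vectors.
    # X_train: (N, n) motor states, Y_train: (N, m) appearance vectors,
    # w: per-dimension weights controlling the smoothness of the estimate.
    d = np.sqrt((((X_train - x) ** 2) * w).sum(axis=1)) + eps
    g = 1.0 / d                    # inverse-distance weights
    return (g[:, None] * Y_train).sum(axis=0) / g.sum()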

The batch part of the off-line view synthesis is as follows: the training images $I_k$ are reduced to appearance vectors $y_k$, and the visual-motor model $\hat{f}$ is estimated over the pairs $(x_k, y_k)$ using one of the interpolation schemes above (a sketch is given below).
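A sketch of that batch stage; extract_appearance is a hypothetical stand-in for whatever appearance representation reduces an image to a vector, and interpolate_appearance is the function sketched above:

import numpy as np

def fit_offline_model(images, X_train, w):
    # Batch stage: reduce training images to appearance vectors and return
    # an interpolating visual-motor model f_hat(x) over the training pairs.
    Y_train = np.stack([extract_appearance(I) for I in images])
    return lambda x: interpolate_appearance(x, np.asarray(X_train), Y_train, w)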

During use, desired action sequences $\{x^*_k\}$ are supplied, for instance by a tele-operator interacting with the ``virtual'' agent. They are made into images in a straightforward way: each desired state $x^*_k$ is mapped through the estimated model to an appearance vector $\hat{y}_k = \hat{f}(x^*_k)$, which is then rendered into an image (see the sketch below).
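A corresponding sketch of the use stage; render_image is again a hypothetical stand-in for reconstructing an image from an appearance vector:

def synthesize_sequence(f_hat, X_desired):
    # Use stage: turn a desired action sequence into a movie of synthesized
    # views by evaluating the estimated visual-motor model at each state.
    return [render_image(f_hat(x_star)) for x_star in X_desired]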

Note that the task in the training data set $\{x_k\}$ and the motor-space description of the desired simulated actions $\{x^*_k\}$ need not be related in any way, other than that the desired simulated actions should come from (a subset of) the workspace explored in the training. That is, the robot could have been doing another task in the training stage than the one being simulated, and the training data does not need to contain the poses of the simulation stage, only reasonably close ones.


