
6.1 Overview of the View Synthesis Method

Our method has a learning stage and an image sequence synthesis stage. In the learning stage the problem we solve is: given a (time) sequence of intensity images $I_1, \ldots, I_k$ and the corresponding sequence of motor states $\mathbf{x}_1, \ldots, \mathbf{x}_k$ of an articulated agent, find an approximate invertible mapping $M$ between image space and motor space so that, for the training set, $I_t = M(\mathbf{x}_t)$. To make the problem tractable we divide $M$ into two parts, $M = F \circ G$, where the intermediate vector $\mathbf{y} = G(\mathbf{x})$ lies in a space of appearance vectors. We will show how to estimate (learn) the function $G$ from watching a moving articulated agent. For images of size $h \times w$, so that $I \in \mathbb{R}^{h \cdot w}$, and an agent with $n$ degrees of freedom ($\mathbf{x} \in \mathbb{R}^n$), we choose the appearance vector space $\mathbf{y} \in \mathbb{R}^m$ so that a significant dimensionality reduction $n \le m \ll h \cdot w$ is achieved. For example, in the two-joint robot arm simulation in Section 6.4, $m = 24$ and $n = 2$.
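As a concrete illustration of the subspace step and the resulting dimensionality reduction, the following is a minimal sketch that encodes a set of training images as $m$-dimensional appearance vectors. The image size, the random stand-in data, and the choice of PCA (via the SVD) as the subspace method are assumptions made for illustration only, not the encoding of Section 4.1.

import numpy as np

# Stand-in sizes: k training frames of h*w pixels, an m-dimensional
# appearance space, and n motor freedoms (m = 24, n = 2 as in the
# Section 6.4 example; k, h, w are made-up values for illustration).
k, h, w, m, n = 200, 64, 64, 24, 2

I = np.random.rand(k, h * w)   # stand-in for the training images I_1..I_k
x = np.random.rand(k, n)       # stand-in for the motor states x_1..x_k

# Subspace step (here: PCA by SVD of the centered images). Each image is
# summarized by an appearance vector y, so h*w = 4096 >> m = 24 >= n = 2.
I_mean = I.mean(axis=0)
_, _, Vt = np.linalg.svd(I - I_mean, full_matrices=False)
B = Vt[:m]                     # appearance basis, shape (m, h*w)
Y = (I - I_mean) @ B.T         # appearance vectors y_1..y_k, shape (k, m)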

Figure 6.1 outlines our method. During training, sequences of intensity images are transformed into sequences of appearance vectors by either a disparity or a subspace method, as described in Section 4.1. The appearance vectors and the corresponding motor vectors are used to estimate the visual-motor model $G$, either on-line while the training images are acquired (Section 4.2) or off-line from a batch set of images (Section 6.2.2).
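Continuing the stand-in variables from the sketch above, the following shows the off-line (batch) form of the estimation and the synthesis direction $M(\mathbf{x}) = F(G(\mathbf{x}))$. The linear least-squares model for $G$ is an assumption made to keep the example short; the actual on-line and off-line estimators are those of Sections 4.2 and 6.2.2.

# Off-line (batch) estimation of a visual-motor model G. A linear
# least-squares fit from motor to appearance vectors stands in for the
# thesis's estimator; x, Y, B, I_mean, k, h, w come from the sketch above.
X1 = np.hstack([x, np.ones((k, 1))])        # motor vectors plus a bias term
G, *_ = np.linalg.lstsq(X1, Y, rcond=None)  # coefficients, shape (n+1, m)

def synthesize(x_new):
    """Approximate M(x) = F(G(x)): motor state -> appearance -> image."""
    y = np.append(x_new, 1.0) @ G           # G: motor space -> appearance space
    return I_mean + y @ B                   # F: appearance space -> image space

img = synthesize(np.array([0.3, -0.1])).reshape(h, w)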

Figure 6.1: Overview of appearance based view synthesis.


