
3 Visual Measures

The purpose of the visual measures is to transform the intensity images into a more compact and descriptive space, while still capturing the pose of the object.

3.1 Feature Based Measures

We use real-time visual feature trackers of three different kinds to obtain visual information. The Oxford snakes [4] are used to track surface discontinuities. A locally developed template matching tracker tracks multiple local features arising from surface markings or corners. For reliability in repeated experiments, or to deal with smooth featureless surfaces such as the lightbulb in fig. 6, we use special-purpose trackers that track attached targets or small lights. To improve tracking, viewing geometry models are widely used. For instance, the Oxford snake package uses an affine model to constrain the motion of the spline control points to rigid 3D deformations, and a strain energy model for nonrigid image plane deformations. The convolution trackers use point velocity for prediction.
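To make the prediction step concrete, the following Python sketch shows a minimal SSD template tracker whose search window is centered on a constant-velocity prediction, in the spirit of the convolution trackers described above; the function name, arguments, and fixed search radius are illustrative assumptions, not the implementation used in our experiments.

    import numpy as np

    def track_template(frame, template, prev_pos, velocity, search_radius=8):
        # Hypothetical single update of an SSD template tracker whose
        # search window is centered on a constant-velocity prediction.
        th, tw = template.shape
        pred_r = int(round(prev_pos[0] + velocity[0]))   # predicted row
        pred_c = int(round(prev_pos[1] + velocity[1]))   # predicted column
        best_ssd, best_pos = np.inf, (pred_r, pred_c)
        for r in range(pred_r - search_radius, pred_r + search_radius + 1):
            for c in range(pred_c - search_radius, pred_c + search_radius + 1):
                if r < 0 or c < 0:
                    continue                             # outside the image
                patch = frame[r:r + th, c:c + tw]
                if patch.shape != template.shape:
                    continue                             # window clipped at border
                ssd = np.sum((patch.astype(float) - template.astype(float)) ** 2)
                if ssd < best_ssd:
                    best_ssd, best_pos = ssd, (r, c)
        # Updated point velocity, used to predict the next search window.
        new_velocity = (best_pos[0] - prev_pos[0], best_pos[1] - prev_pos[1])
        return best_pos, new_velocity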

The image Jacobian provides a new model for tracking. As noted in section 2, the subspace of possible solutions $\Delta y$ to $\Delta y = J \Delta x$ is of dimension $m$ rather than $N$ (and $m \ll N$). In our active framework the agent also knows along which direction $\Delta x$ the system changes. This leaves only a one-dimensional search space along $J \Delta x$ in feature space. Note however that we cannot simply constrain the tracker output to this space. That would take away the innovation term in our model updating, and the system would no longer adapt its model to a changing environment. Instead we use the prediction $\hat{y} = y + J \Delta x$ to detect outliers (e.g. stemming from occluded features or the tracker tracking the wrong thing) and to constrain the tracking search window to a small ``cylinder'' around $\hat{y}$. In a future development we intend to use the predictor in a more general Kalman filter.
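As a minimal sketch of this gating step (the function name, the stacked $(x, y)$ feature layout, and the fixed-radius rule are our illustrative assumptions, not the exact implementation), the prediction $\hat{y}$ can be used to flag per-feature outliers:

    import numpy as np

    def gate_features(y0, J, dx, y_meas, radius):
        # y0, y_meas: stacked (x, y) feature coordinates, length N
        # J: estimated image Jacobian (N x m); dx: motor change (length m)
        y_pred = y0 + J @ dx                      # prediction y + J dx
        dev = (y_meas - y_pred).reshape(-1, 2)    # per-feature deviation
        outliers = np.linalg.norm(dev, axis=1) > radius
        # The raw deviations remain available as the innovation for model
        # updating; only the flagged features are rejected as occluded or
        # mistracked.
        return y_pred, outliers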

3.2 Image Intensity Based Filters

The idea in the subspace eigen image method is to project the raw intensity values onto a basis of $m$ eigen images. Representations based on this idea have been used for the recognition problem (``what'') and for indexing locations (``where'') [20, 22, 21]. There are several ways to choose the eigen images. In our case we will be looking at the same agent, in different poses, and all the images we want to represent are fairly similar. In this case it is advantageous to use a basis specifically designed for the agent. In summary (see also [20, 22]), this can be done by acquiring a (large) number $p$ of size $h \times w$ images $I_1, \ldots, I_p$ of the agent in different poses. Let the mean image be $\bar{I} = \frac{1}{p} \sum_{i=1}^{p} I_i$, and for each image in the data set form the difference image $D_i = I_i - \bar{I}$. Rasterizing each $D_i$ into a column vector of length $N = hw$, form a measurement matrix $A = [D_1, D_2, \ldots, D_p]$, and calculate the covariance matrix $C = A A^T$. The principal components of this data are the eigenvectors of the matrix $C$. The eigenvectors form an orthogonal basis for the original image set, accounting for the variation in the data in decreasing order, according to the corresponding eigenvalues. A dimensionality reduction is achieved by using, instead of all $N$ eigenvectors, only a subspace spanned by, say, the first $m \ll N$ eigenvectors. For practical reasons usually $p \ll N$, and the covariance matrix $C$ will be rank deficient. We can then save computational effort by instead computing $L = A^T A$ and using the $p$ eigenvectors $v_i$ of $L$ to form the $m$ first eigenvectors $u_i$ of $C$ by $u_i = A v_i$, where $i = 1, \ldots, m$.
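A minimal numerical sketch of this procedure follows; the function name and the $p \times h \times w$ array layout are illustrative assumptions:

    import numpy as np

    def eigen_basis(images, m):
        # images: array of shape (p, h, w), the agent seen in p poses.
        # Returns the mean image and the first m eigen images as columns of U.
        p = images.shape[0]
        X = images.reshape(p, -1).astype(float)   # each row: one rasterized image
        mean = X.mean(axis=0)                     # mean image
        A = (X - mean).T                          # N x p matrix of difference images
        L = A.T @ A                               # p x p, cheap since p << N
        w, V = np.linalg.eigh(L)                  # eigenvalues in ascending order
        top = np.argsort(w)[::-1][:m]             # indices of the m largest
        U = A @ V[:, top]                         # lift eigenvectors of L to those of C
        U /= np.linalg.norm(U, axis=0)            # normalize each eigen image
        return mean, U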

After a basis has been acquired (which for a particular agent typically only needs to be done once), any new image $I$ can be represented in this basis as a perception vector $y = U^T (I - \bar{I})$, where $U = [u_1, \ldots, u_m]$, and a given $y$ can be transformed (with some quality loss) into a corresponding image by the inverse formula $\hat{I} = \bar{I} + U y$.
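Continuing the sketch above (and assuming the mean and basis $U$ returned by the hypothetical eigen_basis), projection and its lossy inverse are each a single matrix product:

    def project(image, mean, U):
        # Perception vector y = U^T (I - mean) for a rasterized new image.
        return U.T @ (image.reshape(-1).astype(float) - mean)

    def reconstruct(y, mean, U, shape):
        # Approximate inverse I_hat = mean + U y, reshaped to image size.
        return (mean + U @ y).reshape(shape)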

