The two visual measures we have tested for the visual front end are very different. The ``eigen-image'' subspace method is just a linear projection from image space, but has a much more complex visual-motor function f between appearance space and motor control space. The disparity, or visual motion, measure is more complex, fragile, and time-consuming to compute, but has a near-linear f over most of the workspace of a revolute-link arm. When using only very sparse training data, the differences between the two visual measures cause the view synthesis to degrade in different ways.
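To make the contrast concrete, the following is a minimal sketch of the eigen-image measure as a linear projection, assuming vectorized grayscale training images; the array names, the image size, and the use of an SVD are illustrative choices and not taken from the original implementation.

\begin{verbatim}
# Minimal sketch: the eigen-image measure as a linear projection
# from image space into an m-dimensional appearance space.
import numpy as np

def fit_eigen_images(X, m=24):
    """Mean image and top-m eigen-images from training images
    stacked as rows of X (n_images x n_pixels)."""
    mean = X.mean(axis=0)
    # Rows of Vt are the eigen-images of the centered training set.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:m]            # shapes: (n_pixels,), (m, n_pixels)

def project(image, mean, eigen_images):
    """The visual measure itself: a purely linear map of a new image
    onto the appearance (eigen-image) coordinates."""
    return eigen_images @ (image - mean)   # (m,) coefficient vector

# Illustrative sizes: 110 training images (as in the text), 64x64 pixels.
X = np.random.rand(110, 64 * 64)
mean, E = fit_eigen_images(X, m=24)
y = project(np.random.rand(64 * 64), mean, E)
\end{verbatim}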
We compare how the two methods perform at the limit in Figures 6.2 and 6.3. Figure 6.2 shows how the synthesis breaks down when new views are synthesized by image extrapolation using the disparity-based measure. Between the images there is a 6 and 12 degree difference in the commanded joint 1 position (the axis perpendicular to the floor) of the simulated PUMA robot. As can be seen, performance does not degrade gracefully when attempting excessively large extrapolations, and in this case the weakness lies in the disparity estimation (g) rather than in the visual-motor model estimation (f).
Figure 6.2: Synthesized views of joint 1 rotation based on a disparity image measurement. Left: Small extrapolation, well within the extrapolation range. Middle: Borderline (6 degree) extrapolation, where errors become evident. Right: Excessive extrapolation, where the disparity algorithm breaks.
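As a hypothetical illustration of why disparity-based extrapolation fails abruptly, the sketch below forward-warps an image by a scaled dense disparity field; the simple horizontal warp and all names are our assumptions standing in for the actual disparity-based synthesis, not the method used in the experiments.

\begin{verbatim}
# Hypothetical forward warp: extrapolate a view by scaling a dense
# horizontal disparity field estimated between two training views.
import numpy as np

def extrapolate_view(image, disparity, alpha):
    """Shift each pixel by alpha * disparity.  As alpha grows past the
    training baseline, holes and overlaps accumulate, mirroring the
    abrupt breakdown seen in Figure 6.2."""
    h, w = image.shape
    out = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        x_new = np.round(xs + alpha * disparity[y]).astype(int)
        valid = (x_new >= 0) & (x_new < w)
        out[y, x_new[valid]] = image[y, xs[valid]]
    return out
\end{verbatim}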
In Figure 6.3 we do the same experiment with the eigen-image representation. We use relatively few eigen-images (m=24), and this causes a significant loss in quality from the start. In this sequence the blurriness is increased in all the images because none of the synthesized poses is close to a pose in the training set (of 110 images) that was used to compute the eigen-images. Note also that the actual motion between the frames is smaller than in Figure 6.2. This increase in ``blurriness'' is characteristic of how the eigen-image approach degrades in response to insufficient training data, too few eigen-images used for reconstruction, or movements larger than the validity (trust) region of the visual-motor function approximation.
Figure 6.3: Synthesis based on an eigen-image representation.
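For comparison, here is a sketch of synthesis on the eigen-image side, reusing the mean image and eigen-images from the projection sketch above; y_pred stands in for the output of the visual-motor model f and is an assumption, not the original code. With m as small as 24, the result is a low-rank approximation of the true view, which is what produces the blur visible in Figure 6.3.

\begin{verbatim}
# Sketch of view synthesis from predicted appearance coefficients.
import numpy as np

def synthesize(y_pred, mean, eigen_images):
    """Reconstruct an image from m appearance coefficients; with small m
    the reconstruction is blurred (low-rank) rather than geometrically wrong."""
    return mean + eigen_images.T @ y_pred   # (n_pixels,) synthesized image
\end{verbatim}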
In the later experiments we use the system with up to a 6 degree joint-space extrapolation and interpolation range. The limit of interpolation/extrapolation is reached first for joint 1, since it causes proportionally larger image movements for a given joint movement than the joints later in the serial kinematic chain do. At the present time we favor the eigen-image measure. Despite its lack of ``sharpness'', it solves the task of giving visual feedback that is understandable to a human. Note that the synthesized images appear less blurry when shown in a real-time movie sequence than as static images on paper.