In this experiment we studied how accuracy is affected by adding redundant visual information, making the system more or less over-determined. The value of redundant information has long been recognized in 3D computer vision, where viewing-geometry models and full or partial camera calibration are used to relate redundant feature measurements and improve accuracy in pose estimation; see e.g. [Shapiro et al., 1994]. Our vision system, however, makes no viewing-geometry assumptions, and our cameras are uncalibrated. Instead, as explained in Chapter 4, the visual-motor manipulation model provides a new way of constraining the DOFs of the visual feature vector to the smaller robot action space.
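To make this constraint concrete, the following minimal numerical sketch (my illustration, not code from this thesis) assumes a linearized model $\Delta y \approx J \Delta q$, in which a visual-motor Jacobian $J$ relates $k$ redundant visual feature coordinates to $n < k$ joints. Solving the resulting over-determined system by least squares confines the high-dimensional measurement to the robot action space.

```python
import numpy as np

# Toy sizes: k visual feature coordinates (redundant), n robot joints, k > n.
k, n = 12, 3
rng = np.random.default_rng(0)

J = rng.standard_normal((k, n))                   # stand-in for the estimated visual-motor Jacobian
dq_true = rng.standard_normal(n)                  # an actual joint-space motion
dy = J @ dq_true + 0.5 * rng.standard_normal(k)   # observed feature change, corrupted by tracking noise

# Least-squares solve: the k-dimensional measurement is projected onto the
# n-dimensional action space, so redundant directions are effectively averaged.
dq_est, *_ = np.linalg.lstsq(J, dy, rcond=None)
print(np.linalg.norm(dq_est - dq_true))
```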
We found two main advantages. The first is that errors
due to inaccuracies in either visual feature tracking or goal
generation can be lessened. In the best case, if we add $m$ signals, each with zero-mean noise of variance $\sigma^2$ (admittedly not typically a valid assumption in a vision system), the variance, and thus the noise, of the final signal decreases as $\sigma^2/m$. However, even
if this model of the signal does not hold, as long as at least some of
the visual errors are uncorrelated we will get an improvement from
perceiving redundant visual features. In general, constraining the
robot movement to the smaller n-dimensional action space effectively
implements averaging along the directions in which we have
redundant measurements. The second main advantage is that if one or more features become occluded, or yield a badly conditioned measurement, we can rely on the others to solve the task.
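Extending the sketch above to vary the number of features, the averaging effect can be checked with a small Monte-Carlo experiment. Assuming the idealized case mentioned earlier (independent, zero-mean pixel noise of standard deviation $\sigma$) and a generic random Jacobian as a stand-in for the visual-motor model, the joint-space error of the least-squares solution shrinks roughly as $1/\sqrt{k}$ with the number of feature coordinates $k$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, trials = 3, 1.0, 2000                    # joints, per-feature noise std, Monte-Carlo trials

for k in (4, 8, 16, 32):                           # number of visual feature coordinates
    err = 0.0
    for _ in range(trials):
        J = rng.standard_normal((k, n))            # stand-in visual-motor Jacobian
        dq = rng.standard_normal(n)                # true joint motion
        dy = J @ dq + sigma * rng.standard_normal(k)   # noisy feature measurements
        dq_hat, *_ = np.linalg.lstsq(J, dy, rcond=None)
        err += np.linalg.norm(dq_hat - dq)
    print(k, err / trials)                         # mean joint-space error falls roughly like 1/sqrt(k)
```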
We can even add more features dynamically: the system first identifies their visual-motor model without using them for control, and then puts them into the control loop.
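A hedged sketch of this bootstrapping step: assuming a Broyden-style secant update is used to estimate the new feature's Jacobian rows (a standard choice in uncalibrated visual servoing; whether this matches the exact estimator of Chapter 4 is an assumption), the rows can be refined passively from motions commanded using only the already-active features.

```python
import numpy as np

def broyden_update(J_new, dy_new, dq, lam=1.0):
    """Secant-style (Broyden) refinement of the Jacobian rows of a newly added feature.

    J_new  : (m, n) current estimate of the new feature's Jacobian rows
    dy_new : (m,)   observed change of the new feature's image coordinates
    dq     : (n,)   joint motion commanded from the already-active features
    """
    denom = dq @ dq
    if denom > 1e-12:                              # skip degenerate (near-zero) motions
        J_new = J_new + lam * np.outer(dy_new - J_new @ dq, dq) / denom
    return J_new
```

Until the new rows predict the feature motion well, the controller keeps solving the servoing problem with the old feature set only; afterwards the rows are appended to the stacked Jacobian and the feature participates in the least-squares solve.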
Figure 5.8: Left: 250 runs, varying both the error in the visual goal specification and the number of visual features used, show that having redundant features helps, particularly for imprecise visual specifications. Right: A similar experiment, with errors from imprecise tracking caused by placing the cameras very far away, shows a similar result.
Figure 5.8 shows the effects of increasing the number of tracked points. In the first experiment we placed the cameras 0.5m from the robot, allowing accurate feature tracking. We introduced varying amounts of error in the visual goal specification by adding to each visual goal a vector in a random direction, with a magnitude uniformly distributed over 0.0 to 1.5 pixels and over 0.0 to 3.0 pixels (expected values 0.75 and 1.5 pixels, respectively). Final positioning accuracy was measured with the dial meter, as in the repeatability experiment. The results indicate that for inaccurate visual goals, positioning improves significantly when redundant visual information is used.
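For illustration, a goal perturbation of this kind can be generated as follows; this is a sketch assuming 2D image-plane goals, and the routine and its names are mine rather than the thesis's code.

```python
import numpy as np

def perturb_goal(goal_px, max_mag, rng):
    """Offset a 2D image-plane goal in a random direction, magnitude ~ Uniform(0, max_mag) pixels."""
    theta = rng.uniform(0.0, 2.0 * np.pi)          # random direction in the image plane
    mag = rng.uniform(0.0, max_mag)                # uniform magnitude, expected value max_mag / 2
    return goal_px + mag * np.array([np.cos(theta), np.sin(theta)])

rng = np.random.default_rng(2)
noisy_goal = perturb_goal(np.array([320.0, 240.0]), 1.5, rng)   # expected offset 0.75 px
```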
In the second experiment (Fig. 5.8 right) we placed the cameras 6m from the robot, which decreases the tracking accuracy. Here it is more practical to measure errors as differences in joint encoder values. We observe an improvement in joint-space end-point accuracy as the number of features increases.