In this experiment we studied how accuracy is affected by adding redundant visual information, making the system more or less overdetermined. We found two main advantages. The first is that errors due to inaccuracies in either visual feature tracking or goal generation can be lessened. In the best case, if we add $n$ signals with independent, zero-mean, equal-variance ($\sigma^2$) noise (admittedly not typically a valid assumption in a vision system), the variance of the averaged signal decreases as $\sigma^2/n$, and thus the noise amplitude as $\sigma/\sqrt{n}$. However, even if this model of the signal does not hold, as long as at least some of the visual errors are uncorrelated we will get an improvement from perceiving redundant visual features. In general, constraining the robot movement to the lower-dimensional action space effectively implements averaging along the directions in which we have redundant measurements.
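To make the variance argument concrete, the following Python sketch (a toy simulation, not code from our system; the noise level and feature counts are arbitrary) averages $n$ redundant measurements of the same signal and compares the empirical variance with the predicted $\sigma^2/n$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0          # per-feature noise std (pixels); arbitrary for illustration
true_value = 5.0     # the underlying signal being measured
trials = 100_000

for n in (1, 4, 16):
    # n redundant measurements of the same signal, each with i.i.d. zero-mean noise
    measurements = true_value + sigma * rng.standard_normal((trials, n))
    averaged = measurements.mean(axis=1)
    print(f"n={n:2d}: empirical var={averaged.var():.4f}  predicted={sigma**2 / n:.4f}")
```

With correlated errors the reduction is weaker than $\sigma^2/n$, but any uncorrelated component still shrinks, which is the point made above.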
redundant measurements. The second main advantage is that in the
event of one or more of the features becoming occluded, or giving a
badly conditioned measure, we can rely on the others to solve the
task.
We can even add more features dynamically: the system first identifies their motor-visual transfer function without using them for control, and then puts them into the control loop.
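A minimal sketch of this dynamic addition, assuming the transfer function is a locally linear model (an image Jacobian) estimated by least squares from passively logged joint and feature displacements. The names, the synthetic data, and the rank test are illustrative, not taken from our implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_joints, T = 6, 200

# Ground-truth local motor-visual model of the new feature (unknown to the controller)
J_true = rng.standard_normal((2, n_joints))

# Passive phase: log small joint displacements and the feature's pixel displacements
dq_log = 0.01 * rng.standard_normal((T, n_joints))
df_log = dq_log @ J_true.T + 1e-4 * rng.standard_normal((T, 2))

# Identify the transfer function by least squares: df ≈ dq @ J.T
J_lsq, _, rank, _ = np.linalg.lstsq(dq_log, df_log, rcond=None)
J_new = J_lsq.T

# Admit the feature to the control loop only once its model is well identified
if rank == n_joints:
    print("identification error:", np.abs(J_new - J_true).max())
    # J_stack = np.vstack([J_stack, J_new])  # then dq = pinv(J_stack) @ df_goal
```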
Figure 10: Left: 250 runs varying the error in the visual goal specification and the number of visual features used show that, particularly for imprecise visual specifications, having redundant features helps. Right: a similar experiment, with imprecise tracking obtained by placing the cameras very far away, shows a similar result. Differences in joint errors are small but statistically significant.
Figure 10 shows the effects of increasing the number of tracked points. In the first experiment we placed the cameras 0.5 m from the robot, allowing accurate feature tracking. We introduced varying amounts of error into the visual goal specification by adding to each visual goal a vector in a random direction, with magnitude uniformly distributed in the range 0.0 to 1.5 pixels or 0.0 to 3.0 pixels (expected values 0.75 and 1.5 pixels, respectively). Final positioning accuracy was measured with the dial meter, as in the repeatability experiment. The results indicate that for inaccurate visual goals, positioning improves significantly when redundant visual information is used.
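The goal perturbation just described can be reproduced in a few lines of Python. This is a sketch of the natural reading of the procedure (uniform random direction, uniform magnitude), not code from the experiment:

```python
import numpy as np

def perturb_goal(goal_px, max_magnitude, rng):
    """Offset a 2-D visual goal by a vector in a uniformly random direction
    whose magnitude is uniform on [0, max_magnitude] (expected value max/2)."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    r = rng.uniform(0.0, max_magnitude)
    return goal_px + r * np.array([np.cos(theta), np.sin(theta)])

rng = np.random.default_rng(2)
noisy_goal = perturb_goal(np.array([320.0, 240.0]), max_magnitude=1.5, rng=rng)
```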
In the second experiment we placed the cameras 6 m from the robot, thus decreasing the tracking accuracy. Here it is more practical to measure errors as differences in joint encoder values. We observe a statistically significant improvement in joint-space end-point accuracy when increasing the number of features. The trajectory error, however, increases from about 1.5 pixels when using 4 visual features to 3 pixels when using 16. This is because the straight-line trajectories requested by the trajectory generator are generally not achievable with large numbers of points. In practice this is not a problem, even though our low-level visual planner operates only with straight lines, because we seldom need to make long movements under high-DOF control. High-DOF control is typically used only for fine manipulation in a small space close to a goal, where a straight-line trajectory is a good approximation of the actually achievable configurations.
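The mechanism behind the growing trajectory error can be illustrated with a toy least-squares step (a random Jacobian standing in for the robot's actual kinematics, so the magnitudes are not pixels): the more feature rows are stacked above the joint dimension, the less of the requested per-feature straight-line step the joint motion can realize.

```python
import numpy as np

rng = np.random.default_rng(3)
n_joints = 6

for n_feats in (4, 16):
    J = rng.standard_normal((2 * n_feats, n_joints))   # stacked visual Jacobian (toy)
    df_request = rng.standard_normal(2 * n_feats)      # straight-line step for every feature
    dq = np.linalg.pinv(J) @ df_request                # least-squares joint step
    df_actual = J @ dq
    dev = np.linalg.norm(df_request - df_actual) / np.sqrt(n_feats)
    print(f"{n_feats:2d} features: per-feature deviation from requested step = {dev:.2f}")
```

The per-feature deviation grows with the feature count, mirroring the 1.5 to 3 pixel trend reported above, while the end-point (where all features agree on the goal) still benefits from the redundancy.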