The previous work was done with fixed cameras and only the robot manipulator. An obvious disadvantage of fixed cameras is that the workspace is uniformly sampled in the digital image. Often we would like high-resolution visual information in a few special areas, such as around an object when grasping and picking it up, or when aligning an object for a fine insertion. Variable-resolution visual information can be achieved in a variety of ways. The human head can move independently of the hand and thus achieve the best viewpoint available. Such a system could be built using two robot arms, one holding a camera head and the other doing the manipulation. An often sufficient compromise is the eye-in-hand approach, mounting the cameras on the same arm that does the manipulation. The eye-in-hand approach gives good overviews by raising the manipulator high over the workspace during transport movements, and brings the cameras close, for precision, when we pick up and manipulate objects.
Figure 7.7: Top left: Initial position showing the eye-in-hand, two-camera configuration. Following 4 images: Moving in over the light bulb to pick it up, and preshaping the hand for the grasp. Picking up the bulb and transporting it to the socket. Screwing in the bulb using the Utah/MIT hand. Last 4 images: The corresponding sequence from the controller's eye-in-hand camera. Video 6.
In Fig. 7.7 we see the eye-in-hand approach applied to installing a light bulb. The same visual servoing controller is used as in the previous experiments. Here the adaptiveness is crucial for handling the scale changes that occur as we move along the optical axis of the camera. The black-and-white checkered tracking targets seen in the pictures facilitate tracking, and are used for goal specification. For instance, the goal for pick-up is to align the nearest tracking target on the bulb between the nearest two fingers on the hand. The latter position, the point between the fingers, remains fixed with respect to the cameras in the eye-in-hand setup and thus need not be tracked.
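The sketch below illustrates this style of image-based goal specification, assuming a tracker that reports the bulb target's pixel coordinates in both cameras; the names (GRASP_POINT_PX, alignment_error, goal_reached) and the numbers are illustrative, not taken from the original system.

    # Minimal sketch of the pick-up goal: align the tracked bulb target with
    # the fixed grasp point between the two nearest fingers, in both cameras.
    # Since the hand does not move relative to the eye-in-hand cameras, the
    # grasp point is a calibrated constant rather than a tracked feature.
    import numpy as np

    # Assumed pixel coordinates of the point between the two nearest fingers,
    # one (u, v) pair per camera.
    GRASP_POINT_PX = np.array([[312.0, 240.0],   # left camera
                               [328.0, 240.0]])  # right camera

    def alignment_error(bulb_target_px: np.ndarray) -> np.ndarray:
        """Image-space goal error: tracked bulb target minus fixed grasp point.

        bulb_target_px: 2x2 array of (u, v) pixel coordinates of the nearest
        checkered target on the bulb, one row per camera. Returns a 4-vector
        that a visual servoing controller could drive to zero.
        """
        return (bulb_target_px - GRASP_POINT_PX).ravel()

    def goal_reached(bulb_target_px: np.ndarray, tol_px: float = 2.0) -> bool:
        """Goal satisfied when the target sits between the fingers in both
        camera images, within a pixel tolerance."""
        return bool(np.all(np.abs(bulb_target_px - GRASP_POINT_PX) < tol_px))

    # Example: a tracker reading still 20 px left and 15 px above the goal.
    reading = GRASP_POINT_PX + np.array([[-20.0, -15.0], [-20.0, -15.0]])
    print(alignment_error(reading))   # -> [-20. -15. -20. -15.]
    print(goal_reached(reading))      # -> False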
When the robot has visually aligned the bulb over the socket, the hand performs a downward, rotating manipulation of the bulb to screw it in. The visual alignment goal for the rotation (i.e., the angle at which the thread ends) is unknown, but the needed manipulation is simple: keep turning until the bulb lights up and stays lit. The hand manipulations are performed as a canned motion sequence, specified as a downward-sloping rotation in hand space, without visual alignment feedback controlling the movement.
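A minimal sketch of such a canned screw-in sequence is given below, under assumed interfaces: rotate_and_lower, release_and_regrasp, and bulb_is_lit are hypothetical placeholders, not functions from the original system, and the turn and pitch parameters are made up for illustration.

    # Canned screw-in sequence: repeat a downward-sloping rotation in hand
    # space until the bulb lights up and stays lit. No visual feedback is
    # used inside the loop.
    import time

    def rotate_and_lower(angle_rad: float, descent_m: float) -> None:
        """Placeholder: command the hand to rotate the bulb by angle_rad
        while lowering it by descent_m (the downward-sloping rotation)."""

    def release_and_regrasp() -> None:
        """Placeholder: open the fingers, rotate the hand back, and regrasp
        the bulb so another turn can be made."""

    def bulb_is_lit() -> bool:
        """Placeholder: success cue, e.g. a brightness check in the image."""
        return False

    def screw_in_bulb(turn_rad: float = 1.0,
                      pitch_m_per_rad: float = 0.0005,
                      max_turns: int = 20) -> bool:
        """Keep turning until the bulb lights up and stays lit, or give up."""
        for _ in range(max_turns):
            rotate_and_lower(turn_rad, turn_rad * pitch_m_per_rad)
            if bulb_is_lit():
                time.sleep(0.5)          # check that it stays lit
                if bulb_is_lit():
                    return True
            release_and_regrasp()
        return False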