
6 Experiments

Electronic MPEG videos of the demonstrations in this section are accessible through the WWW; click on the Video symbols in the figure captions. More demos are available on-line.

We have evaluated our visual servoing controller by (1) testing the repeatability and convergence of positioning, and (2) using it as a component in solving several complex manipulation tasks. These experiments are described in more detail in our technical report [9]. On a PUMA 761 we found repeatability to be 35% better under visual servo control than under standard joint control. On a worn PUMA 762 with significant backlash, visual control improved repeatability fivefold. The Utah/MIT dextrous hand has 16 controllable DOFs, and its four fingers form a parallel kinematic chain when grasping an object; fine manipulation of an object in the hand is much more difficult than with a robot arm [17]. When manipulating a rigid object in 6 DOF under visual servo control we observed a 73% improvement in repeatability compared to Cartesian-space joint feedback control.

We have evaluated the model estimation in 3, 6 and 12 controlled DOF. In 3 DOF we can successfully estimate the Jacobian without any prior models while carrying out a manipulation task. In 6 and 12 DOF a good initial estimate is beneficial; the estimate can be bootstrapped as described in section 5.2. Redundant visual measures are also beneficial, as they reduce errors due to tracking and visual goal specification. In a 3 DOF positioning task we varied the number of measures between m = 4 and m = 16; positioning accuracy was 4 times better with m = 16 than with m = 4.
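The on-line estimation can be illustrated with a short sketch. The Python code below is our own illustration, not the actual controller: it applies a rank-one secant (Broyden-style) correction to the visual-motor Jacobian after each move and computes a least-squares servo step toward an image-space goal. The function names, the update gain alpha and the servo gain are assumptions made for the example.

import numpy as np

def secant_update(J, dq, dy, alpha=1.0):
    """Rank-one secant update of the m x n visual-motor Jacobian estimate.

    J     : current estimate mapping joint changes dq to image-feature changes dy
    dq    : joint-space move just executed (length n)
    dy    : observed change in the m visual measures
    alpha : update gain in (0, 1]; values below 1 smooth out tracking noise
    """
    denom = float(dq @ dq)
    if denom < 1e-12:              # negligible motion: keep the old estimate
        return J
    return J + alpha * np.outer(dy - J @ dq, dq) / denom

def servo_step(J, y, y_goal, gain=0.3):
    """One visual-servoing step: least-squares joint move toward the image goal."""
    dq, *_ = np.linalg.lstsq(J, gain * (y_goal - y), rcond=None)
    return dq

With redundant visual measures (m larger than the number of controlled DOF) the least-squares step averages out individual tracking errors, which is consistent with the accuracy improvement noted above.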

We have used the visual servoing to solve several complex, real-world tasks, such as playing checkers, setting a table, solving a kid's puzzle and changing a light bulb [10]. Visual space programming differs from conventional robot programming in that commands are given in image space rather than world space. This makes user-friendly programming interfaces easy to implement. We have tried having the robot operator (1) draw visual sketches of the desired movements, (2) point out objects and alignments in video images, and (3) show an image sequence depicting the task (see [13]).

Figure 5: Solving a kid's puzzle. Video 2 and Video 3.

In fig. 5 the PUMA robot solves a kid's puzzle under visual control. The operator points in an image, using the computer mouse, to indicate which piece goes where. The program decomposes this into transportation, alignment and insertion movements, and plans trajectories in visual space (white lines in fig. 5); a sketch of this decomposition is given after fig. 6. For coarse transportation movements the centroids of the pieces are tracked using two stationary cameras. During alignment and fine manipulation, accurate pose information is obtained by tracking the corners of the pieces. Learned visual-motor and visual-world models are used for open-loop manipulation when visual feedback is unavailable (e.g. due to occlusion during insertion). In fig. 6 the Utah/MIT hand is used to grasp and screw in the light bulb. The hand and cameras are mounted on a PUMA robot, which performs the transportation movements.

Figure 6: Exchanging a light bulb. Video 6
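To make the task decomposition concrete, the following hypothetical Python sketch organizes the puzzle task as a phase plan. The phase names and feature choices follow the description above; the data structure, the tolerances and the run_plan dispatcher are our own illustration, not the actual program.

from dataclasses import dataclass

@dataclass
class Phase:
    name: str          # "transport", "align" or "insert"
    features: str      # which visual measures drive this phase
    tolerance: float   # image-space error (pixels) at which to switch phases
    closed_loop: bool  # False where occlusion forces open-loop execution

PUZZLE_PLAN = [
    Phase("transport", "piece centroids in two stationary cameras", 10.0, True),
    Phase("align",     "corner points of the piece",                 1.0, True),
    Phase("insert",    "occluded: learned visual-motor/world models", 0.0, False),
]

def run_plan(plan, servo, open_loop_move):
    """Run each phase under visual servoing, falling back to open loop when occluded."""
    for phase in plan:
        if phase.closed_loop:
            servo(phase.features, phase.tolerance)
        else:
            open_loop_move(phase.name)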

View synthesis

Figures 7 and 8 allow on-line and off-line view synthesis to be compared. The blurriness in the on-line case is a result of efficiency tradeoffs: to get real-time execution on a SUN SPARCstation we use a piecewise linear visual-motor model and a visual representation with only 24 eigen-images. In the off-line case we can allow time for preprocessing before playing the movie; in fig. 8 we use a third-order spline model and 300 eigen-images. Another reason for using a first-order model in the on-line case is that it has fewer parameters to estimate, and thus can be learned after only a few movements (as described in section 4).
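A minimal sketch may make the eigen-image synthesis concrete. Assuming the training images have been flattened into vectors, the Python code below fits a basis of k eigen-images, fits a single linear map from motor coordinates to eigen-coefficients (the on-line system uses a piecewise linear model, the off-line system a third-order spline), and reconstructs a predicted view. The function names and the least-squares fit are illustrative, not the paper's implementation.

import numpy as np

def fit_eigenspace(images, k):
    """PCA basis of k eigen-images from training images (each a flattened vector)."""
    X = np.asarray(images, dtype=float)          # shape (n_images, n_pixels)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    E = Vt[:k].T                                 # (n_pixels, k) eigen-images
    coeffs = (X - mean) @ E                      # (n_images, k) training coefficients
    return mean, E, coeffs

def fit_linear_visual_motor(q_train, coeffs):
    """Least-squares linear map [q, 1] -> eigen-coefficients."""
    Q = np.hstack([q_train, np.ones((len(q_train), 1))])
    A, *_ = np.linalg.lstsq(Q, coeffs, rcond=None)
    return A                                     # (n_dof + 1, k)

def synthesize_view(q, A, mean, E):
    """Predict the image seen from motor state q."""
    c = np.append(q, 1.0) @ A                    # predicted eigen-coefficients
    return mean + E @ c                          # reconstructed (flattened) image

Enlarging the basis and replacing the linear map with a spline, as in the off-line case, trades synthesis speed for the sharper images of fig. 8.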

Figure 7: Using the on-line linear model to synthesize a few small deviations ("twiddles") from the real physical state shown in the bottom center image.

Figure 8: Off-line simulation of an articulated PUMA robot, here controlled in 3 DOF world space.

The 2D off-line view synthesis part of this software, together with a demo movie, is available by ftp.


