
7.6 Combining Primitives: Solving a Child's Puzzle

In this demonstration we assemble control primitives from three classes into a program capable of solving a shape sorter puzzle, in which differently shaped pieces must be inserted into correspondingly shaped holes. The three modes used are 3-DOF transportation for moving the pieces around above the table, fine manipulation for alignment and insertion, and open-loop manipulation using a local model for dropping the pieces inside the box while they are occluded. A minor variation of the transportation primitive, a guarded move, is used for the pick-up: the manipulator moves until it touches the puzzle piece to be picked up.

Figure 7.5: The physical setup in the puzzle demonstration. A Puma robot equipped with a magnet on a stick is used to pick up, move, and insert differently shaped puzzle pieces into a box with correspondingly shaped holes. The robot actions are viewed by two uncalibrated cameras, giving visual input to a visual space planner and a visual feedback controller.

Fig. 7.5 shows the setup of the puzzle experiment. The cameras are placed fairly arbitrarily. The only requirements are that they see some relevant features during the actions, and that their viewpoints differ enough to give a reasonable condition number on the visual-motor Jacobian. The system possesses no geometric information about either the cameras or the manipulator, and must solve the task using only visual information.
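To illustrate the viewpoint requirement, the following is a minimal numpy sketch (not part of the original system) comparing the condition number of a stacked two-camera visual-motor Jacobian for nearly identical versus well-separated viewpoints; all matrix values are hypothetical:

```python
import numpy as np

# Hypothetical 4x3 visual-motor Jacobian: rows are the (u, v) image
# coordinates of a feature in each of two cameras, columns are the
# 3 motor DOF. Nearly identical viewpoints make the camera-2 rows
# almost dependent on the camera-1 rows, inflating the condition
# number; well-separated viewpoints keep it small.
J_similar = np.array([[1.00, 0.00, 0.20],
                      [0.00, 1.00, 0.10],
                      [0.98, 0.02, 0.21],   # camera 2 almost identical
                      [0.02, 0.99, 0.11]])

J_different = np.array([[1.00, 0.00, 0.20],
                        [0.00, 1.00, 0.10],
                        [0.10, 0.20, 1.00],  # camera 2 resolves depth
                        [0.90, -0.30, 0.30]])

for name, J in [("similar viewpoints", J_similar),
                ("different viewpoints", J_different)]:
    print(f"{name}: cond = {np.linalg.cond(J):.1f}")
```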

The puzzle pieces are 5 to 10 cm across, and the holes are cut about 1 mm larger than the pieces. A 1-pixel image offset corresponds to between 0.5 and 2 mm of physical movement in the image plane, depending on where in the image the movement is made. The tolerances are thus about as tight as can be handled, given the limits inherent in the vision system and manipulator.

Figure 7.6: Different modes of control used in the puzzle solving manipulations. The images are taken from the controller's right camera. The white lines show the controller's visual-space trajectory plans for carrying out the manipulation. (Video 2 and Video 3.)

The basic insert operation consists of five phases. In the first, the robot moves the manipulator to a position above the puzzle piece. The second phase is the actual pick-up movement: a guarded visual servoing move down towards the puzzle piece, with the minor variation that the goal condition is not a visual alignment but a contact condition. The third phase is a long reach, or transportation, move that brings the puzzle piece to a location approximately above the hole. A fine manipulation move then aligns the puzzle piece for insertion. Finally, an open-loop process inserts the piece and scrapes it off against the side of the hole. The two new modes of control are described in Table 7.2.
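The five phases can be summarized as the following minimal Python sketch; the primitive names are hypothetical stand-ins for the controllers described in the text, and here they simply report what they would do:

```python
# Hypothetical wrappers for the controllers described in the text.
def transport(goal):
    print(f"transportation move to {goal}")

def guarded_move(direction):
    print(f"guarded move {direction} until contact is detected")

def fine_align(corners, targets):
    print(f"fine manipulation: align {len(corners)} corners to hole")

def open_loop_insert():
    print("open-loop insert and scrape-off using the local model")

def insert_piece(piece, hole):
    transport(("above piece", piece["pos"]))    # phase 1: reach the piece
    guarded_move("down")                        # phase 2: pick up on contact
    transport(("above hole", hole["corners"][0]))  # phase 3: long reach
    fine_align(piece["corners"], hole["corners"])  # phase 4: alignment
    open_loop_insert()                          # phase 5: model-based insert

piece = {"pos": (120, 85), "corners": [(118, 90), (132, 90)]}
hole = {"corners": [(300, 200), (340, 200), (340, 240), (300, 240)]}
insert_piece(piece, hole)
```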

The first two frames of Fig. 7.6 show the visual space plans made to pick up and insert the leftmost puzzle piece. An estimate of ``up'' in image space is known to the algorithm; it is given once by inserting a long, vertical, easily identified object, here a pencil, into the scene. We have not implemented an object recognition system for this application. We do not anticipate that this would be hard to do, but it was not the main aim of our work. Instead, while the program runs, it shows the user images of the scene and asks the user to point out with the mouse where in the image a puzzle piece is located, along with the corners of the slot into which it is to be put. The system then uses the ``up'' information to make the visual plan, and starts moving according to the plan. The transportation and fine manipulation movements are executed as described in the previous section.
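One plausible way such an image-space plan could be assembled from the user's clicks and the ``up'' estimate is sketched below; the via-point construction and all numeric values are assumptions for illustration, not the system's recorded implementation:

```python
import numpy as np

# Image-space "up" direction, estimated once from the tracked pencil
# (hypothetical value), normalized to unit length.
up = np.array([0.05, -1.0])
up = up / np.linalg.norm(up)

piece = np.array([120.0, 310.0])                # clicked piece (pixels)
hole = np.array([[400.0, 250.0], [440.0, 250.0],
                 [440.0, 290.0], [400.0, 290.0]])  # clicked hole corners

lift = 80.0                                     # lift height in pixels
via1 = piece + lift * up                        # raise piece off table
via2 = hole.mean(axis=0) + lift * up            # point above the hole

# Straight-line visual-space plan through the via-points.
plan = [piece, via1, via2, hole.mean(axis=0)]
for i, p in enumerate(plan):
    print(f"waypoint {i}: ({p[0]:.0f}, {p[1]:.0f})")
```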

In this application, the robot ``grasps'' objects using a pair of magnets, one in the object to be picked up and one in the tip of the manipulation tool. The guarded pick-up move exploits the flexibility of the tool: when the manipulator touches a puzzle piece, the magnet, whose visual position is monitored by a feature tracker, stops moving downward. This is detected, and the movement is stopped. Technically, contact detection is done by watching the model error (the difference between predicted and actual movement) and terminating the guarded move on a steep increase in that error. After pick up, the piece is transported to a goal location above the appropriate hole, determined using the image-space estimate of up. Visual feedback during the fine manipulation is obtained by tracking the lower corners of the puzzle piece that are visible in each camera. The goal of the fine manipulation is to bring these corners to the corresponding corners of the hole, previously pointed out by the operator.
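The contact test amounts to comparing the feature motion predicted by the local Jacobian model against the tracked motion. A minimal sketch, with a hypothetical threshold and toy Jacobian values:

```python
import numpy as np

def contact_detected(J_hat, dx, y_prev, y_meas, err_thresh=1.0):
    """Return True if the tracked features y_meas deviate sharply from
    the model prediction after the commanded motor step dx."""
    y_pred = y_prev + J_hat @ dx            # model-predicted features
    return np.linalg.norm(y_meas - y_pred) > err_thresh

# Toy 2x3 visual-motor model for the tracked magnet tip.
J_hat = np.array([[1.0, 0.0, 0.2],
                  [0.0, 1.0, -0.9]])
dx = np.array([0.0, 0.0, -2.0])             # commanded downward step
y_prev = np.array([200.0, 150.0])

y_free = y_prev + J_hat @ dx                # tracker follows prediction
y_stuck = y_prev.copy()                     # magnet stalls on the piece

print(contact_detected(J_hat, dx, y_prev, y_free))   # False: still moving
print(contact_detected(J_hat, dx, y_prev, y_stuck))  # True: contact
```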

That is as far as visual feedback takes us. The rest of the insertion and drop off is done using the local model of the vision-to-motor space mapping provided by the Jacobian $\hat{J}$ estimated during the long reach movement. We already know the vector $\mathbf{y}_{\mathrm{up}}$ representing world-space (physical) up in visual coordinates. To put the piece in, we solve $\hat{J}\,\Delta\mathbf{x} = -a\,\mathbf{y}_{\mathrm{up}}$ for the motor step $\Delta\mathbf{x}$, where $a$ is some suitable fraction of the height of the box, measured in pixels in image space. To deposit the puzzle piece in the box, we release it from the magnet by scraping it against the sides of the hole. For this we need estimates of sideways movement. We cannot get a very exact estimate without trying to identify sideways-going lines; for the scraping it turns out to be enough to use two lines in vision space orthogonal to our ``up'' vector.
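Since the stacked Jacobian has more rows (image coordinates) than columns (motor DOF), the open-loop step is naturally computed as a least-squares solve. A minimal sketch, with hypothetical Jacobian and ``up'' values:

```python
import numpy as np

# Solve J_hat @ dx = -a * y_up in the least-squares sense for the
# motor step dx that moves the piece a fraction of the box height
# straight down in image space. All numeric values are hypothetical.
J_hat = np.array([[1.0, 0.1, 0.3],     # 4x3: two cameras, 3 motor DOF
                  [0.0, 1.0, -0.8],
                  [0.2, 0.9, 0.4],
                  [0.8, -0.2, -0.7]])
y_up = np.array([0.0, -1.0, -0.1, -0.9])  # image-space "up", both views
a = 30.0                                  # fraction of box height (pixels)

dx, *_ = np.linalg.lstsq(J_hat, -a * y_up, rcond=None)
print("open-loop motor step:", np.round(dx, 3))
```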

When the piece has been deposited, the user is asked to click on a new piece and the process repeats. Used in this tele-assisted, or semi-autonomous, mode, the robot was able to solve the puzzle. The last frame of the puzzle sequence in Fig. 7.6 shows an actual trajectory (solid) compared to the planned one (dotted).

Our current implementation has several limitations. First, because the tracker needs fairly good corners, we cannot handle complex curved shapes. The system also has difficulty with large rotations of the pieces. To handle these, the visual planner would need to make visual plans corresponding to circles in world space (the straight-line approximation is no longer good enough). As features become occluded during large rotations, we would need to dynamically remove them from the Jacobian model and add new features to replace them. This is relatively easy to do by training the new features for a few Jacobian updating cycles (how many depends on the filtering) without letting them affect the movement, and then putting them into the feedback loop. The harder part is a correspondence problem: a goal value has to be found for each new feature, and asking the user becomes tedious. Some higher-level recognition process could be used here.
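As an illustration of such training, the sketch below grows a new feature's Jacobian row with a secant-type (Broyden) update from the movement already in progress; the choice of update rule and all numeric values are assumptions for illustration:

```python
import numpy as np

def broyden_update(J_row, dy, dx, lam=1.0):
    """Secant update of one feature coordinate's Jacobian row from the
    observed feature change dy after motor step dx."""
    return J_row + lam * (dy - J_row @ dx) * dx / (dx @ dx)

rng = np.random.default_rng(0)
J_true = np.array([0.9, -0.3, 0.5])    # unknown true row (hypothetical)
J_row = np.zeros(3)                    # untrained replacement feature

# Train passively for a few cycles during the ongoing movement,
# without letting the new feature influence control.
for _ in range(8):
    dx = rng.normal(size=3)            # motor step from the controller
    dy = J_true @ dx                   # tracked change of the feature
    J_row = broyden_update(J_row, dy, dx)

print("trained row:", np.round(J_row, 2))  # approaches J_true
```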

Table 7.2: Two additional classes of control introduced in the puzzle task: the guarded move and the open-loop move.

