
7.1 Overview of Task Specification

Traditional task specification and planning are done in a global Euclidean world coordinate frame, and both cameras and robots are calibrated in this frame. Our uncalibrated system does not have such a frame, so task description is fundamentally different. Instead, the central representation is composed of the perception vectors y. Goals, as well as the relevant aspects of the current system state, are specified in terms of these perception vectors. There is a direct correspondence between the perception vectors and the image appearance, so we can think of coding a task in terms of desired or goal images. As time progresses the system description changes on each of the representation levels, namely raw image, feature image, perception and motor control; see Fig. 7.2. This describes a dynamic system in which the real world itself is a component.
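As an illustration of this perception-vector coding, the following is a minimal sketch in Python with NumPy; the function names, feature layout and dimensions are hypothetical, not the system's actual code. A goal is simply the stacked image coordinates of the tracked features in all cameras, and the visual error is the difference between the current and goal vectors.

import numpy as np

# Sketch only: a goal "image" is encoded as a perception vector y, i.e. the
# stacked (u, v) coordinates of the tracked features in all cameras.

def perception_vector(feature_points):
    """Stack the (u, v) image coordinates of the tracked features into one vector."""
    return np.concatenate([np.asarray(p, dtype=float) for p in feature_points])

def visual_error(y_current, y_goal):
    """Residual driving the system; zero when the current view matches the goal image."""
    return y_goal - y_current

# Example: four features tracked in each of two cameras gives a 16-component vector.
y_goal = perception_vector([(120, 80), (130, 82), (125, 95), (118, 90),
                            (310, 60), (322, 64), (318, 75), (305, 71)])
y_now  = perception_vector([(100, 70), (112, 74), (108, 88), ( 98, 82),
                            (290, 55), (300, 58), (297, 70), (285, 66)])
print(visual_error(y_now, y_goal))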

Our system uses three visual teaching modes. The first is the ``point in image'' mode shown in Fig. 7.1. In the second mode the operator shows a sequence of real images depicting the task; the feature trackers are used to extract goal and subgoal perception vectors from this image sequence. In the third mode the operator describes the task symbolically, e.g. ``put the square puzzle piece in the square slot''. The first two modes require no image interpretation, and we have used them successfully in several tasks. We have tried the third mode only in very simple environments, such as a computer checkers player that recognizes just the two kinds of pieces, the board geometry, and whether the human opponent is in the field of view.
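The second teaching mode can be pictured as follows. This is a hedged sketch only, with the feature tracker reduced to a stand-in function rather than the trackers actually used by the system; the names are hypothetical.

import numpy as np

def track_features(image, prev_points):
    """Stand-in for the real feature trackers; it just returns the previous
    positions so the sketch runs, whereas the actual trackers update them
    from the image content."""
    return prev_points

def teach_from_sequence(images, initial_points):
    """Turn a demonstrated image sequence into a list of subgoal perception vectors."""
    subgoals = []
    points = initial_points
    for image in images:
        points = track_features(image, points)
        subgoals.append(np.concatenate([np.asarray(p, dtype=float) for p in points]))
    return subgoals  # the last entry is the goal, the earlier ones are subgoals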

Figure 7.2: The representation levels in a vision-based control system

Not all tasks are suitable to be defined entirely in terms of visual alignments. For instance, during an insertion an object may become totally occluded. Some manipulations are also inherently better suited to a description in a 3D (world) frame (e.g., move the light bulb to straight above the socket) or in the joint frame (highly stereotypical motions such as the rotations needed to screw in an object). We use local, object-centered 3D frames, which can be Cartesian or affine, depending on how much structure is available in the image. For instance, identifying the three lines forming a corner of a rectangular box in two cameras, or in two poses, gives a Euclidean base with the transformation P to image space. Using more views and tracking the anchor points improves the accuracy of the base [Kutulakos and Jägersand, 1995]. Often an incomplete base is enough (e.g., to move up we only need to identify a vertical line near the robot in each of the cameras). A manipulation Δp described in base P is transformed to vision space by Δy = PΔp and to motor space by solving ĴΔx = Δy, using the (locally valid) Jacobian estimate Ĵ obtained during manipulation. This gives our uncalibrated hand and arm system locally the same world-frame manipulation capabilities as a calibrated robot. We wish to emphasize that we are not using this to simulate a calibrated robot and then doing task planning and manipulation in traditional ways; most parts of the manipulation are done in visual space.
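The frame transfer at the end of this paragraph can be summarized in a few lines of Python. The matrix shapes and symbols (P for the local base, Ĵ for the Jacobian estimate) are illustrative assumptions, with the motor-space step solved in the least-squares sense.

import numpy as np

def to_vision_space(P, dp):
    """Map a motion dp expressed in the local object-centered base P into a
    desired change dy of the perception vector: dy = P dp."""
    return P @ dp

def to_motor_space(J_hat, dy):
    """Solve J_hat dx = dy (least squares) with the locally valid Jacobian estimate."""
    dx, *_ = np.linalg.lstsq(J_hat, dy, rcond=None)
    return dx

# Toy example: 16-dimensional perception vector, 3D local base, 6 robot joints.
rng = np.random.default_rng(0)
P     = rng.standard_normal((16, 3))   # local base axes expressed in image coordinates
J_hat = rng.standard_normal((16, 6))   # current visual-motor Jacobian estimate
dp    = np.array([0.0, 0.0, 0.05])     # e.g. "move up" along one axis of the base

dy = to_vision_space(P, dp)            # desired visual motion
dx = to_motor_space(J_hat, dy)         # joint motion realizing it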


