
5 Use of Estimated Visual-Motor Models

Estimated visual-motor manipulation models are useful in a variety of settings. We show how to use them for visual servo control without the need for prior models of either the robot arm or the hand. We also show how to use the inverse visual-motor model to generate images, thus simulating the actions of an articulated active agent. Internally, the visual-motor models can serve as visual representations for recognition, and as models for filtering, tracking and search reduction. On a higher level, vision-based control can lead to user-friendly visual robot ``programming'' or ``teaching'' interfaces, suitable for use in unstructured, hard-to-model environments. In previous work [13] we have shown how a visually guided robot arm can be instructed to solve a variety of hand-eye tasks by: (1) showing a sequence of images describing the task at hand; (2) having a human draw a sketch describing the visual alignments in the task; or (3) using a video image of the work area and having a human operator interactively point out to the robot (with the mouse) what to do (vision-based telemanipulation).

5.1 Control

  The active agent specifies its actions in terms of desired perceptions $\mathbf{y}^{*}$. We need a control system capable of turning these goal perceptions into motor actions $\mathbf{x}$. A simple control law, occurring in some form in most visual servoing research (e.g. [3, 19, 16]), is

(5)   \dot{\mathbf{x}} = K\,\hat{J}^{+}(\mathbf{y}^{*} - \mathbf{y})

where K is a gain matrix and $\hat{J}^{+}$ denotes the pseudo-inverse of the estimated visual-motor Jacobian. In a discrete time system running at a fixed cycle frequency (at or below the 60 Hz video frequency), the gain K turns into a step length $\lambda$: $\Delta\mathbf{x}_k = \lambda\,\Delta\mathbf{x}$, where $\Delta\mathbf{x}$ is the least-squares solution to the overdetermined system $\hat{J}\Delta\mathbf{x} = \mathbf{y}^{*} - \mathbf{y}$. Dynamic stability of the robot at this low sampling frequency is achieved by a secondary set of high bandwidth joint feedback controllers.
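
As a concrete illustration, one discrete servoing step amounts to a least-squares solve followed by scaling with the step length. The following Python/NumPy sketch uses illustrative variable names and a fixed default step length; it is not the implementation used in our system:

    import numpy as np

    def servo_step(J_hat, y_goal, y_current, lam=0.3):
        """One discrete visual servoing step.

        J_hat     : (n x m) estimated visual-motor Jacobian, n >= m (overdetermined)
        y_goal    : desired perception vector y*
        y_current : current perception vector y
        lam       : step length (gain), 0 < lam <= 1

        Returns the scaled motor correction lam * dx, where dx is the
        least-squares solution of  J_hat dx = y* - y.
        """
        error = y_goal - y_current
        dx, *_ = np.linalg.lstsq(J_hat, error, rcond=None)
        return lam * dx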

This popular controller, however, has two major deficiencies. First, even for a convex problem (f convex) it is not guaranteed to converge [26], and second, in the case of a non-convex problem it often does not converge at all [26]. Previous work has overcome this problem by making only a single, small-distance move within a relatively smooth and well scaled region of f. To solve whole, real tasks this is not a viable solution. To solve the first problem we adopt a trust region method [26], similar to the well known Marquardt step length ($\lambda$) adaptation scheme. In the trust region method, the current $\lambda$ indicates the distance over which the estimated model is valid. For the second we use a technique known in numerical analysis as ``inbäddning'' (embedding) [25] or homotopy methods [23], which involves generating intermediate goals or ``way points'' along the way to the main goal $\mathbf{y}^{*}$, transforming a globally non-convex problem into a set of locally convex subproblems. Intuitively, both these techniques help to synchronize actions with model acquisition, so that the actions never run too far ahead before the local model has been adapted to the new environment. For details and theoretical properties of these two methods see our control theory paper [14].
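
The two remedies can be sketched roughly as follows. The acceptance-ratio thresholds and the linear interpolation of way points below are illustrative choices, not the exact rules of the trust region and homotopy methods of [14, 26]:

    import numpy as np

    def adapt_step_length(lam, y_goal, y_prev, y_predicted, y_measured,
                          good=0.75, bad=0.25, lam_max=1.0):
        """Trust-region style adaptation of the step length lambda:
        compare the error reduction predicted by the local linear model
        with the reduction actually measured, then grow or shrink the
        region in which the model is trusted."""
        predicted_red = np.linalg.norm(y_goal - y_prev) - np.linalg.norm(y_goal - y_predicted)
        actual_red = np.linalg.norm(y_goal - y_prev) - np.linalg.norm(y_goal - y_measured)
        rho = actual_red / predicted_red if predicted_red > 1e-12 else 0.0
        if rho > good:                      # model predicted well: trust it further
            return min(2.0 * lam, lam_max)
        if rho < bad:                       # model predicted poorly: shrink the region
            return 0.5 * lam
        return lam

    def way_points(y_start, y_goal, n):
        """Homotopy-style intermediate goals between y_start and y*."""
        return [y_start + (k / n) * (y_goal - y_start) for k in range(1, n + 1)]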

5.2 Visual space task representation and planning

  To date, work in image/feature space visual control has demonstrated low-level servoing behaviors achieving a single visual alignment, e.g. [16, 19]. A remaining principal challenge is how to specify complex tasks in visual space, divide them into subtasks, plan trajectories in visual space, and select different primitive visual servoing behaviors and visual goals. We suggest a semi-automated way of solving these high-level problems by providing an image-based programming interface, as shown in fig. 2. The user specifies the changes he or she wishes to bring about in the world by clicking on the objects and pointing out their desired locations and alignment features. If this is done interactively we have a very low bandwidth telemanipulation system, isolating the user from the difficult low-level control problems. When it is done off line, we have a user-friendly programming interface.

Figure 2: Vision-based programming interface.

Traditional task specification and planning is done in a calibrated, global, Euclidean world coordinate frame. Our uncalibrated system does not have this frame, so task description is fundamentally different. Instead the central frame is composed of the perception vectors $\mathbf{y}$, and goals as well as relevant aspects of the current system state are specified in these. There should be a direct correspondence between the perception vectors and the image appearance, so we can think of coding our task in terms of desired or goal images. As time progresses the system description changes on each of the different representation levels, namely raw image, feature image, perception and motor control; see fig. 3. This describes a dynamic system that includes the real world as a part of it.

Our system uses three visual teaching modes. The first is the ``point in image'' mode shown in fig. 2. In the second, the operator shows a sequence of real images depicting the task, and the feature trackers are used to extract goal and subgoal perception vectors from the image sequence. In the third, the operator describes the task symbolically, e.g. ``put the square puzzle piece in the square slot''. The first two require no image interpretation, and we have tried them successfully in several tasks. The third we have tried only in very simple environments.
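
In the second mode the task effectively becomes a list of goal perception vectors. A minimal sketch, assuming a hypothetical tracker object with a track(image) method returning image coordinates (not the actual interface of our trackers):

    import numpy as np

    def perception_vector(trackers, image):
        """Stack the tracked feature coordinates from one image into y."""
        return np.concatenate([t.track(image) for t in trackers])

    def goals_from_demonstration(trackers, image_sequence):
        """Turn a demonstrated image sequence into subgoal perception
        vectors y*_1 ... y*_n; each is then reached in turn by visual
        servoing."""
        return [perception_vector(trackers, img) for img in image_sequence]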

Figure 3: The representation levels in a vision-based control system.

Not all tasks can be defined entirely in terms of visual alignments. For instance, during an insertion an object may become totally occluded. Some manipulations are inherently better suited to description in a world frame (e.g., move the light bulb to a point straight above the socket) or the joint frame (highly stereotypical motions such as the rotations to screw in an object). We use local, object-centered world frames, which can be Cartesian or affine, depending on how much structure is available in the image. For instance, identifying the three lines forming a corner on a rectangular box in two cameras, or in two poses, gives a Euclidean base P. Using more views improves the accuracy of the base [12]. Often an incomplete base is enough (e.g., to move up we only need to identify a vertical line near the robot in each of the cameras). A manipulation $\Delta\mathbf{p}$ described in base P is transformed to vision space by $\Delta\mathbf{y} = P\,\Delta\mathbf{p}$ and to motor space by solving $\hat{J}\Delta\mathbf{x} = \Delta\mathbf{y}$, using the (locally valid) Jacobian estimate obtained during manipulation.
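
As a concrete sketch, assuming the (possibly incomplete) base P is represented as a matrix mapping displacements in the local object frame to perception-space displacements:

    import numpy as np

    def manipulation_to_motor(P, J_hat, dp):
        """Map a manipulation dp expressed in the local object frame P
        to vision space (dy = P dp) and then to motor space by solving
        the overdetermined system  J_hat dx = dy  in the least-squares
        sense."""
        dy = P @ dp
        dx, *_ = np.linalg.lstsq(J_hat, dy, rcond=None)
        return dy, dx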

We now describe how to construct a mid-level primitive from low-level servoing behaviors. Many tasks contain subtasks involving a long range transportation move followed by a short range fine manipulation. Our results from evaluating the controller suggest that for the most robust model estimation and control we should control as few DOFs as possible. To transport an object described by one point we need only 3 DOF. To manipulate a rigid object we need 6. As noted earlier, when controlling 3 DOF our algorithm needs no prior models.

To bootstrap the 6 DOF control we use the model estimated during the 3 DOF stage. Fig. 4 shows the visual part of an insertion sequence. For the 3 DOF long range transportation, one of the features (here white dots) is extracted and tracked in two cameras. For the fine manipulation, 14 visual features (image coordinates) are used, obtained by tracking 5 points in one camera and 2 in the other. When switching from 3 to 6 DOF mode, the first three columns of the 6 DOF Jacobian are filled from the 3 DOF Jacobian, and the last three with random numbers:

\hat{J}_{6} = \left[\; \hat{J}_{3} \quad R \;\right], \qquad R \ \mathrm{random}
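
In code, and in the simplified case where the same visual features are measured in both stages, the bootstrap is a single block assignment; the scale of the random initialization is an illustrative assumption:

    import numpy as np

    def bootstrap_jacobian(J3, scale=0.1, rng=None):
        """Initial n x 6 Jacobian for the fine manipulation stage: the
        first three columns come from the 3 DOF transport Jacobian, the
        last three start as small random numbers and are refined by the
        on-line estimator during the 6 DOF alignment phase."""
        rng = rng or np.random.default_rng()
        n = J3.shape[0]
        return np.hstack([J3, scale * rng.standard_normal((n, 3))])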

Figure 4: Left: Planning the different phases of an insert-type movement, consisting of reaching and fine manipulation movements. Right: Performing the planned insertion. (Video 1)

The 6 DOF alignment serves two purposes. It aligns the piece in 6 DOF, obtaining the correct initial pose for the 6 DOF fine manipulation. Also, during this phase the bootstrapped 6 DOF Jacobian is updated to an accurate estimate, allowing high precision moves in the later fine manipulation stage.

5.3 View synthesis

View synthesis can be done offline by generating a movie sequence of an agent performing a task, given a corresponding control command sequence $(\mathbf{x}_1, \ldots, \mathbf{x}_n)$ and an a priori identified visual-motor transfer function f. We do this by interpolating and extrapolating the learned visual-motor transfer function; we have tried piecewise first and third order spline models for this (sketched below). More interesting is the online case: generating arbitrary simulated views representing (reasonably small) deviations $\Delta\mathbf{x}$ from the current state of the real physical agent, while at the same time learning and refining the model used to generate those views. We describe a telemanipulation application in which the teleoperator controls the agent, but long delays or limited bandwidth between the teleoperation site and the agent prevent immediate and/or full frame rate visual feedback to the operator. Instead we use the view synthesis method to generate the immediate visual feedback, and use the slower real visual feedback to calibrate the model used for the view synthesis.
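
A minimal sketch of the offline inter/extrapolation, under the simplifying assumption that the learned transfer function is sampled along the command trajectory and parameterized by the sequence index:

    import numpy as np
    from scipy.interpolate import CubicSpline, interp1d

    def synthesize_perceptions(t_samples, y_samples, t_query, order=3):
        """Inter/extrapolate learned perception vectors along a command
        trajectory parameterized by t (e.g. the index of the command
        sequence x_1 ... x_n).  y_samples has shape (len(t_samples),
        n_features).  order=1 gives the piecewise linear model, order=3
        the cubic spline; each synthesized y then drives the image
        generation of section 3.2."""
        if order == 1:
            f = interp1d(t_samples, y_samples, axis=0, fill_value="extrapolate")
        else:
            f = CubicSpline(t_samples, y_samples, axis=0, extrapolate=True)
        return f(t_query)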

Through observation of the process with the method of section 4 we have an estimate $\hat{J}$ of the current visual-motor Jacobian. Consider one step in the online algorithm. At time i we have the current image $I_i$, perception vector $\mathbf{y}_i$, visual-motor Jacobian estimate $\hat{J}_i$, and current agent state $\mathbf{x}_i$ in motor space. The teleoperator issues a motor command $\Delta\mathbf{x}$, so $\mathbf{x}_{i+1} = \mathbf{x}_i + \Delta\mathbf{x}$. Our model estimate predicts the change in the perception vector $\mathbf{y}$:

(6)   \hat{\mathbf{y}}_{i+1} = \mathbf{y}_{i} + \hat{J}_{i}\,\Delta\mathbf{x}

The simulated image $\hat{I}_{i+1}$ resulting from the command is generated from $\hat{\mathbf{y}}_{i+1}$, as described in section 3.2, and shown to the operator.

After some delay d, and possibly at a lower rate than the full frame rate, the real image arrives. From it the real measured feature vector $\mathbf{y}^{\mathrm{meas}}$ is extracted, and the innovation term $\mathbf{y}^{\mathrm{meas}} - \hat{\mathbf{y}}$ (measured minus predicted at the corresponding time step) is incorporated (added) into the current perception vector estimate. Now we have $\Delta\mathbf{y}$ and $\Delta\mathbf{x}$ and can update the Jacobian with the model estimation method of section 4. The online method thus estimates and uses successive linear models of the visual-motor transfer function, each valid around a particular state $\mathbf{x}_i$. How long a delay d we can tolerate depends on the validity range of our linear model (which can be found online; see section 5.1), which in turn depends on the visual-motor transfer function of our system and on the visual measures we choose.
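
One step of the online loop can be sketched as follows; the Broyden-style secant correction below stands in for the estimator of section 4, and the image generation of section 3.2 is not shown:

    import numpy as np

    def predict_step(J_hat, y_hat, dx):
        """Eq. (6): predicted perception after the operator's command dx.
        The predicted vector drives the image generation of section 3.2."""
        return y_hat + J_hat @ dx

    def correct_step(J_hat, y_current_est, y_pred_at_i, y_meas_at_i, dx_at_i):
        """Called when the (possibly delayed) real image for step i arrives.

        The innovation (measured minus predicted perception at step i) is
        added to the current perception estimate, and the Jacobian gets a
        Broyden-style secant correction, standing in for the estimator of
        section 4."""
        innovation = y_meas_at_i - y_pred_at_i
        y_current_est = y_current_est + innovation
        denom = float(dx_at_i @ dx_at_i)
        if denom > 1e-12:                   # skip the update for a zero move
            J_hat = J_hat + np.outer(innovation, dx_at_i) / denom
        return J_hat, y_current_est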

