
3 A Tutorial on Adaptive Visual Servoing

This section gives a tutorial-style, stepwise description of a basic visual servoing system, focusing on a minimal, easy-to-implement design. Its main purpose is to show how little theory is required to build a bare-bones visual servoing system. Later chapters derive the same functionality, but in a perhaps less intuitive way.

Figure 3.1: A typical visual control setup uses two cameras placed so that they can observe the workspace from two different viewpoints. Placement is arbitrary, and the controller has no prior knowledge of the camera locations, their relation to the robot, or the robot kinematics. The set-points for the robot joint angles are specified by the vector x, and the visual perceptions, or features, are represented by the vector y. These are related by an initially unknown transfer function y = f(x).

The situation depicted in Fig. 3.1 shows a typical setup for visual servoing. For control of three or more degrees of freedom we typically want to use at least two cameras, spaced widely enough apart to give well-conditioned depth cues. Except for this criterion, camera placement is arbitrary. In the images taken by the cameras we extract and continuously track the camera coordinate positions of interesting features, for instance the position of the robot end effector. We denote the vector of visual feature coordinates by y. Features suitable for visual control can be drawn from a wide class of measurements in the image, and we will discuss this in Section 4.1. However, for the purposes of the derivations in this section, it suffices to think of the visual features (perceptions) as the image coordinate locations of tracked points. The vector x represents the output signals from the controller; in this example we let x be the robot joint angles.
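As a concrete (and purely illustrative) sketch of these two vectors, the snippet below stacks the tracked image coordinates from both cameras into y and holds the joint-angle set-points in x. The pixel values and the three-DOF joint vector are hypothetical; only the stacking convention matters.

```python
import numpy as np

# Hypothetical tracked image point: one (u, v) pixel pair per camera
# for a single tracked feature (e.g., a marker on the end effector).
point_cam1 = np.array([312.4, 240.8])   # (u, v) in camera 1
point_cam2 = np.array([198.1, 255.3])   # (u, v) in camera 2

# The visual feature vector y simply stacks all tracked image coordinates;
# here one point seen in two cameras gives y in R^4.
y = np.concatenate([point_cam1, point_cam2])

# The controller output x holds the robot joint-angle set-points
# (a 3-DOF arm is assumed purely for illustration).
x = np.zeros(3)
```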

In a traditional visual control setup we must know two functions, either a priori or through some calibration process. The first is the camera, or vision, calibration function h, which transforms image-space feature locations into a Cartesian world coordinate system (typically having its origin at the base of the robot). The second is the robot kinematic calibration function g, which transforms desired robot end-effector positions, given in world coordinates, into joint angles. The final robot positioning accuracy depends on the accuracy of both of these functions (the composition of h and g), since feedback is performed only over the joint values.
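The following sketch shows the structure of this traditional, fully calibrated pipeline. The bodies of h and g are placeholder stand-ins (a real h would come from stereo triangulation with calibrated cameras, and a real g from the robot's kinematic calibration); the point is only that the commanded joints come from composing the two functions, so errors in either one pass straight through to the final pose.

```python
import numpy as np

def h(y):
    """Camera (vision) calibration: image features -> world coordinates.
    Placeholder stand-in; a real h would triangulate with calibrated cameras."""
    return y[:3]   # pretend the first three entries already encode a world point

def g(p_world):
    """Robot kinematic calibration: world position -> joint angles.
    Placeholder stand-in; a real g would be the arm's inverse kinematics."""
    A = np.eye(3)  # illustrative linear map
    return A @ p_world

# Open-loop positioning in the traditional setup: feedback is closed only
# over the joints, so errors in h and g both appear in the final pose.
y_goal = np.array([320.0, 240.0, 200.0, 250.0])   # desired image-space features
x_command = g(h(y_goal))
```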

It was realized early on that accuracy could be improved by using the residual visual error, measured after a movement was executed, to perform a correction move. Weiss [Weiss, 1984] classified such schemes as position-based (iterated) look-and-move approaches. The next step was to notice that if goal points were specified in image coordinates and realized through joint angles, then the intermediate world coordinate system could be eliminated entirely. Weiss calls this image-based look-and-move. Finally, if instead of supplying the visual error only occasionally, the system servos the joints on a continuous visual signal, we have what Weiss calls image-based visual servoing. A minimal sketch of the look-and-move loop is given below.
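In this sketch the helper names observe_features, move_joints, and solve_correction are hypothetical placeholders for the vision, motion, and model components. The loop repeats correction moves until the residual image-space error is small; shrinking the interval between corrections toward the servo rate turns the same loop into image-based visual servoing.

```python
import numpy as np

def look_and_move(y_goal, observe_features, move_joints, solve_correction,
                  tol=1.0, max_iters=10):
    """Iterated (image-based) look-and-move: after each move, the residual
    visual error drives a correction move, so calibration errors shrink
    with each iteration instead of appearing in the final pose."""
    for _ in range(max_iters):
        y = observe_features()            # current image-space feature vector
        error = y_goal - y                # residual visual error
        if np.linalg.norm(error) < tol:   # stop when the image error is small
            break
        dx = solve_correction(error)      # joint correction from a (rough) model
        move_joints(dx)                   # execute the correction move
```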

Here we present a method which can learn a sufficient mapping from an image-based (vision space) specification to robot (joint space) execution primitives. The system requires no prior information about the visual-motor function f; it estimates a time-varying (and thus piecewise) linear model while performing the task, without explicitly introducing any extra learning steps or movements. It is capable of learning multivariate (coupled, vector-valued) models with an arbitrary number m of visual inputs and an arbitrary number n of output control signals. The method places only very loose constraints on which models can be successfully learned.
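One common way to realize such an on-line linear model is a secant (Broyden-style) update of an estimated visual-motor Jacobian, paired with a Newton-like control step. The sketch below shows that idea only; it is not meant to preempt the exact estimator derived in the following sections, and the gain value and least-squares solve are illustrative choices.

```python
import numpy as np

def broyden_update(J, dx, dy):
    """Secant (Broyden-style) update of the estimated visual-motor Jacobian J:
    after commanding a joint change dx and observing a feature change dy,
    adjust J minimally so that it would have predicted dy = J @ dx."""
    dx = dx.reshape(-1, 1)
    dy = dy.reshape(-1, 1)
    denom = float(dx.T @ dx)
    if denom > 1e-12:                     # skip degenerate (near-zero) moves
        J = J + ((dy - J @ dx) @ dx.T) / denom
    return J

def servo_step(J, y, y_goal, gain=0.5):
    """One visual servoing step: a gain-scaled least-squares (Newton-like)
    joint correction that moves the observed features y toward y_goal."""
    error = y_goal - y
    dx, *_ = np.linalg.lstsq(J, gain * error, rcond=None)
    return dx
```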




