This section describes the differential visual feedback controller we use and issues relevant to its design.
Figure 1: Typical visual control setup uses two cameras placed so that they can observe the workspace from two different viewpoints. Otherwise, placement is arbitrary, and the controller has no prior knowledge of the camera locations, their relation to the robot, or the robot kinematics. The setpoints for the robot joint angles are specified by the vector x, and visual perceptions, or features, are represented by the vector y. These are related by an initially unknown transfer function y = f(x).
Figure 1 depicts a typical setup for visual servoing. For control of three or more DOF we typically want to use at least two cameras, spaced widely enough apart to give well-conditioned depth cues. Otherwise, camera placement is fairly arbitrary.
In the images taken by the cameras we extract, and continuously track, the positions of interesting features, for instance the position of the robot end effector. We denote the vector of visual features by y. Features suitable for visual control can be drawn from a much wider class of image measurements, which we discuss later. However, for the purposes of the derivations in this section, thinking of the visual features as the image locations of tracked points will suffice.
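To summarize the notation (the first-order form below is not spelled out in this passage, but it is the relation a differential controller works with; the dimension symbols m and n are introduced here only for convenience):

\[
  y = f(x), \qquad y \in \mathbb{R}^{m}, \quad x \in \mathbb{R}^{n},
\]
\[
  \Delta y \;\approx\; J(x)\,\Delta x, \qquad J(x) = \frac{\partial f}{\partial x}(x) \in \mathbb{R}^{m \times n}.
\]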
The vector x represents the output signals from the controller. In this example, for conceptual simplicity, we let x be the robot joint positions. In our actual implementation we will use a combination of position and velocity control.
In a traditional visual control setup we have to know two functions, either a priori or through some calibration process. The first is a camera, or vision, calibration function C that transforms image-space feature locations into a Cartesian world coordinate system (typically having its origin at the base of the robot). The second is the robot kinematic calibration function K that transforms desired robot end-effector positions, given in world coordinates, into joint angles. The final robot positioning accuracy depends on the accuracy of both these functions (C and K), since feedback is performed only over the joint values.
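A minimal sketch of this traditional pipeline, under purely illustrative assumptions (an affine camera model standing in for C and a two-link planar arm standing in for K; none of these specifics come from the text), shows how the visual measurement enters only once, before the move:

```python
import numpy as np

def camera_to_world(y_image, A, b):
    """Vision calibration C: image feature coordinates -> world coordinates,
    modeled here as a calibrated affine map (illustrative assumption)."""
    return A @ y_image + b

def world_to_joints(p_world, l1, l2):
    """Kinematic calibration K: desired end-effector position -> joint angles,
    here the closed-form inverse kinematics of a 2-link planar arm."""
    x, y = p_world
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    q2 = np.arccos(np.clip(c2, -1.0, 1.0))
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return np.array([q1, q2])

def position_based_move(y_goal_image, A, b, l1, l2):
    """Compose C and K once, then command the joints.  Feedback is closed
    only over joint values, so errors in A, b, l1, l2 all appear in the
    final pose."""
    p_goal = camera_to_world(y_goal_image, A, b)   # apply C
    x_goal = world_to_joints(p_goal, l1, l2)       # apply K
    return x_goal                                  # sent to the joint-level servo
```

Any miscalibration of A, b, l1 or l2 translates directly into a positioning error that the joint-level feedback cannot see.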
It was realized early on that accuracy could be improved by using the residual visual error, measured after a movement was executed, to perform a correction move. [Weiss 84] classified such schemes as Position based (iterated) look-and-move approaches. The next step was to notice that if goal points were provided in image coordinates and realized through joint angles, then the intermediate world coordinate system could be completely eliminated. Weiss calls this Image based look-and-move. Finally, if instead of supplying the visual error only occasionally, the system servos the joints using a continuous visual signal, we have what Weiss calls Image based visual servoing.
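For illustration, an image based (iterated) look-and-move loop might look like the following sketch; the callbacks measure_features and move_joints, the fixed Jacobian approximation J_approx, and the gain and tolerance values are all assumptions made for this example:

```python
import numpy as np

def look_and_move(y_goal, measure_features, move_joints, J_approx,
                  gain=0.5, tol=1.0, max_iters=20):
    """Image based (iterated) look-and-move: after each completed move the
    residual image error is measured and a correction move is computed
    directly in image coordinates, so the world coordinate system never
    appears explicitly.  J_approx is an a-priori approximation of dy/dx."""
    J_pinv = np.linalg.pinv(J_approx)
    for _ in range(max_iters):
        y = measure_features()              # visual measurement between moves
        error = y_goal - y
        if np.linalg.norm(error) < tol:     # residual image error small enough
            break
        dx = gain * (J_pinv @ error)        # correction step in joint space
        move_joints(dx)                     # executed under joint-level feedback
```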
We present a system which can learn a sufficing mapping from image based (vision space) specifications to robot (joint space) execution primitives.
The system requires no prior models of the transfer function; it
learns the model while performing the task without explicitly
introducing any extra learning steps or movements.
It is capable of learning multivariate (coupled, vector-valued) transfer functions with an arbitrary number m of visual inputs and an arbitrary number n of output control signals.
The method places only very loose constraints on which
transfer functions can be successfully learned.
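As a concrete, though hypothetical, sketch of such a controller, the loop below refines an estimate of the image Jacobian from the very movements the task requires, using a Broyden-style secant update; the specific update rule, the damping gain, and the stopping tolerance are illustrative assumptions, not a statement of the method developed later in the paper.

```python
import numpy as np

def uncalibrated_visual_servo(y_goal, measure_features, move_joints, J0,
                              gain=0.3, tol=0.5, max_steps=200):
    """Differential visual feedback with an online-estimated model.

    y_goal : desired visual feature vector (length m)
    J0     : initial guess of the m x n image Jacobian (may be crude)
    The model is refined from the (dx, dy) pairs produced by the task
    itself, so no separate calibration or exploratory movements are needed.
    """
    J = J0.astype(float).copy()
    y = measure_features()
    for _ in range(max_steps):
        error = y_goal - y
        if np.linalg.norm(error) < tol:
            break
        # Newton-like differential step using the current model estimate.
        dx = gain * (np.linalg.pinv(J) @ error)
        move_joints(dx)
        y_new = measure_features()
        dy = y_new - y
        # Broyden (secant) update: correct J only along the direction moved.
        denom = float(dx @ dx)
        if denom > 1e-12:
            J += np.outer(dy - J @ dx, dx) / denom
        y = y_new
    return y
```

Because every (dx, dy) pair comes from the task motion itself, the model is refined exactly where it is used, consistent with the requirement above that no extra learning steps or movements be introduced.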