An active vision agent has control over its actions and can
observe the results of an action via the changes in visual appearance.
We study a robot agent, Fig. 4.1, in an unstructured environment. The robot action reference frame is joint space, described as desired joint angles, $\mathbf{x} = [x_1, \ldots, x_n]^T$, and their time derivatives $\dot{\mathbf{x}}$. The changes in visual appearance are recorded in a perception or feature vector $\mathbf{y} = [y_1, \ldots, y_m]^T$. Suitable visual features can be drawn from
a large class of visual space
measurements [Weiss and Sanderson, 1987, Jägersand and Nelson, 1994]. For example, we have found that
features which
can be represented as point positions or point vectors in camera space are
suitable [Jägersand, 1994].
We track features such as boundary
discontinuities (lines, corners) and surface markings. Properties of
suitable visual measures are described in the next section. Redundant visual
perceptions (
) are desirable as they are used to constrain
the raw visual sensory information.
Figure 4.1: Visual control setup using two cameras.
The visual features and the agent's actions are related by a visual-motor model $f$, satisfying $\mathbf{y} = f(\mathbf{x})$.
The goal for the control problem is: given the current state $\mathbf{x}_0$ and $\mathbf{y}_0 = f(\mathbf{x}_0)$, and the desired final state in visual space $\mathbf{y}^*$, find a motor command, or sequence thereof, $\mathbf{x}_1, \ldots, \mathbf{x}_k$, such that $f(\mathbf{x}_k) = \mathbf{y}^*$. Alternatively, one can view this problem as the minimization of the functional
\begin{displaymath}
E(\mathbf{x}) = \left\| \mathbf{y}^* - f(\mathbf{x}) \right\|.
\end{displaymath}
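To make this minimization concrete, a single servoing step can be sketched as a least-squares solve of the locally linearized problem. The following Python/NumPy fragment is a minimal illustration, not the controller of Chapter 5; the names servo_step and alpha are ours, and the fixed step gain is an assumption.

import numpy as np

def servo_step(x, y, y_star, J_hat, alpha=0.5):
    """One step toward min_x ||y* - f(x)||, using the local first
    order model  y ~ y_k + J_hat (x - x_k).

    x      : current joint angles (n,)
    y      : current visual feature vector (m,)
    y_star : desired visual feature vector (m,)
    J_hat  : estimate of the m x n visual-motor Jacobian
    alpha  : illustrative step gain in (0, 1] (assumption)
    """
    error = y_star - y                        # residual in visual space
    # Least-squares motor correction: dx = J_hat^+ (y* - y).
    dx, *_ = np.linalg.lstsq(J_hat, error, rcond=None)
    return x + alpha * dx                     # commanded joint angles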
In a traditional, calibrated setting we have to know two functions, the camera calibration $h$ and the robot kinematics $g$, either a priori or through some calibration process. The accuracy at which we can represent objects and have the active agent manipulate them depends on the accuracy of both of these functions ($f = h \circ g$), since typically, feedback is implemented only over the agent's internal joint values.
In our uncalibrated visual servoing the visual-motor model $f$ is initially unknown, but since $f$ is implicitly represented in the real hand-eye system, we can learn about it by physically moving the robot and reading the visual response (a set of feature vectors $\{\mathbf{y}_k\}$) corresponding to the movement along a particular path in motor space (represented as a set of motor states $\{\mathbf{x}_k\}$).
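As an illustration, an initial model estimate can be obtained by small exploratory joint motions and finite differencing of the visual response. This is a hedged sketch: move_and_observe is a hypothetical stand-in for commanding the physical hand-eye system and reading the tracked features, and eps is an illustrative perturbation size.

import numpy as np

def estimate_jacobian(x0, move_and_observe, eps=1e-2):
    """Finite-difference estimate of the m x n visual-motor Jacobian
    around configuration x0, obtained by physically perturbing each
    joint and reading the visual response.

    move_and_observe(x) -> feature vector y (m,)
        (hypothetical interface to the real hand-eye system)
    """
    y0 = move_and_observe(x0)
    n, m = len(x0), len(y0)
    J = np.zeros((m, n))
    for j in range(n):
        x = x0.copy()
        x[j] += eps                           # perturb one joint only
        J[:, j] = (move_and_observe(x) - y0) / eps
    return J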
At any time $k$ we estimate a first order model, see Section 4.2,
\begin{displaymath}
\hat{\mathbf{y}}(\mathbf{x}) = \mathbf{y}_k + \hat{J}(\mathbf{x}_k)(\mathbf{x} - \mathbf{x}_k).
\end{displaymath}
The model is valid around the current system configuration $(\mathbf{x}_k, \mathbf{y}_k)$, and is described by the ``image'' [Corke, 1994] or visual-motor Jacobian $J$ defined as
\begin{equation}
J(\mathbf{x}) = \left[ \frac{\partial f_i}{\partial x_j} \right] =
\left[ \begin{array}{ccc}
\frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\
\vdots & & \vdots \\
\frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n}
\end{array} \right]. \tag{10}
\end{equation}
The Jacobian $J$ not only relates visual changes to motor changes, as is exploited in visual feedback control, but also highly constrains the possible visual changes to the subspace of solutions to $\Delta\mathbf{y} = J \Delta\mathbf{x}$ (remember $m \gg n$).
Thus the Jacobian
J is also a visual model, parameterized in exactly the degrees
of freedom in which our system can change. This is useful in a variety of
active vision tasks, as we will explore in this paper.
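A small sketch of this constraint in use: an observed feature change can be projected onto the column space of $J$, and the residual outside that subspace flags measurement error or outliers. The function name and interface below are ours, for illustration only.

import numpy as np

def predict_and_constrain(J, dx, dy_measured):
    """Use the visual-motor Jacobian as a visual model.

    J           : m x n Jacobian estimate (m >> n)
    dx          : executed motor change (n,)
    dy_measured : raw measured feature change (m,)
    Returns the model prediction, the measured change projected onto
    range(J) (the subspace of changes the system can produce), and
    the residual outside that subspace.
    """
    dy_pred = J @ dx                          # prediction from the model
    # Orthogonal projector onto the column space of J.
    P = J @ np.linalg.pinv(J)
    dy_constrained = P @ dy_measured          # feasible part of measurement
    residual = dy_measured - dy_constrained   # large residual -> outlier cue
    return dy_pred, dy_constrained, residual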
During the execution of a manipulation task, the agent progressively learns more about the environment. The Jacobians thus
estimated form a piecewise
linear approximation of the visual-motor model. The trust region
controller we describe in Chapter 5 causes the model
estimate to be updated more often in difficult (non-linear) regions
of the model than in easy (near linear) regions. Thus the resulting
piecewise linear model is estimated on an adaptive size mesh.
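One standard way to realize such on-line refinement is a Broyden-style rank-one secant update after each executed motion; the actual estimator is developed in Section 4.2 and Chapter 5, so the sketch below only illustrates the update idea.

import numpy as np

def broyden_update(J, dx, dy):
    """Rank-one secant update of the Jacobian estimate so that the
    updated model reproduces the observed motion: J_new @ dx == dy.

    J  : current m x n Jacobian estimate
    dx : executed motor change (n,)
    dy : observed visual change (m,)
    """
    denom = dx @ dx
    if denom < 1e-12:                         # no motion, nothing to learn
        return J
    # Correct the prediction error along the direction actually moved.
    return J + np.outer(dy - J @ dx, dx) / denom

Each such update is trusted only near the configuration where it was made, which is what gives the model its piecewise linear, adaptive-mesh character.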
Now the reader may ask: why is a visual-motor model a viewing model? We are more accustomed to purely visual viewing geometry models. Image measurements can be transformed to 6 DOF object pose using a Euclidean, affine, or projective model, typically defined purely visually by a set of special ``fiducial'' points in the images. In a vision-only system this relates different (often redundant) measurements on the same object to each other, and allows precise pose estimation. In a visual-motor system, however, we can use the motor DOFs to constrain the visual measurements. For instance, the image positions of several features tracked on a rigid object held by a robot arm or hand can be expressed in the low-DOF motor (joint) frame, achieving the same accurate pose estimation as if a 6 DOF Euclidean, affine, or projective frame were used. We exploit this in the next section for prediction and outlier removal in tracking.