4 Uncalibrated Jacobian-Based Viewing Model

An active vision agent has control over its actions and can observe the results of an action via the changes in visual appearance. We study a robot agent, Fig. 4.1, in an unstructured environment. The robot action reference frame is joint space, described by the desired joint angles $\mathbf{x} \in \mathbf{R}^n$ and their time derivatives $\dot{\mathbf{x}}$. The changes in visual appearance are recorded in a perception or feature vector $\mathbf{y} \in \mathbf{R}^m$. Suitable visual features can be drawn from a large class of visual space measurements [Weiss and Sanderson, 1987, Jägersand and Nelson, 1994]. For example, we have found that features which can be represented as point positions or point vectors in camera space are suitable [Jägersand, 1994]. We track features such as boundary discontinuities (lines, corners) and surface markings. Properties of suitable visual measures are described in the next section. Redundant visual perceptions ($m > n$) are desirable, as the redundancy is used to constrain the raw visual sensory information.
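For concreteness, here is one illustrative instantiation of $\mathbf{y}$, consistent with the two-camera setup of Fig. 4.1 but in our notation rather than the text's: tracking $p$ point features in the left and right images and stacking their image coordinates gives

\begin{displaymath}
\mathbf{y} = (u^l_1, v^l_1, \ldots, u^l_p, v^l_p,\; u^r_1, v^r_1, \ldots, u^r_p, v^r_p)^T \in \mathbf{R}^{4p} ,
\end{displaymath}

so the visual dimension $m = 4p$ grows with the number of tracked features, while the motor dimension $n$ stays fixed at the number of controlled joints; the redundancy $m > n$ is then easy to obtain.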

[Figure 4.1: Visual control setup using two cameras.]

The visual features and the agent's actions are related by a visual-motor model $f$, satisfying $\mathbf{y} = f(\mathbf{x})$. The goal of the control problem is, given the current state $\mathbf{x}_0$ and $\mathbf{y}_0$, and the desired final state in visual space $\mathbf{y}^*$, to find a motor command, or sequence thereof, $\mathbf{x}^*$ such that $f(\mathbf{x}^*) = \mathbf{y}^*$. Alternatively, one can view this problem as the minimization of the functional

\begin{displaymath}
\min_{\mathbf{x}} \left\| \mathbf{y}^* - f(\mathbf{x}) \right\| \qquad (9)
\end{displaymath}
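Although the intermediate step is not written out here, linearizing $f$ around the current state shows how (9) connects to the first-order model estimated below: with the visual-motor Jacobian $J$ of Eq. (10),

\begin{displaymath}
f(\mathbf{x}_0 + \Delta\mathbf{x}) \approx \mathbf{y}_0 + J(\mathbf{x}_0)\,\Delta\mathbf{x}
\qquad \Rightarrow \qquad
\min_{\Delta\mathbf{x}} \left\| \mathbf{y}^* - \mathbf{y}_0 - J(\mathbf{x}_0)\,\Delta\mathbf{x} \right\| ,
\end{displaymath}

a linear least-squares problem whose solution $\Delta\mathbf{x} = J^{+}(\mathbf{y}^* - \mathbf{y}_0)$, with $J^{+}$ the Moore-Penrose pseudo-inverse, is the standard resolved-rate form of a visual servoing step.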

In a traditional, calibrated setting we have to know two functions, the camera calibration $h$ and the robot kinematics $g$, either a priori or through some calibration process. The accuracy with which we can represent objects and have the active agent manipulate them depends on the accuracy of both of these functions ($f = h \circ g$), since typically feedback is implemented only over the agent's internal joint values.

In our uncalibrated visual servoing the visual-motor model $f$ is initially unknown, but since $f$ is implicitly represented in the real hand-eye system, we can learn about it by physically moving the robot and reading the visual response (a set of feature vectors $\{\mathbf{y}_k\}$) corresponding to the movement along a particular path in motor space (represented as a set of motor states $\{\mathbf{x}_k\}$). At any time $k$ we estimate a first-order model $\hat{J}_k$ (see Section 4.2). The model is valid around the current system configuration $\mathbf{x}_k$, and is described by the ``image'' [Corke, 1994] or visual-motor Jacobian, defined as

\begin{displaymath}
J(\mathbf{x}) = \frac{\partial f}{\partial \mathbf{x}}
= \left[ \frac{\partial f_i}{\partial x_j} \right], \quad i = 1 \ldots m, \; j = 1 \ldots n . \qquad (10)
\end{displaymath}
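As a minimal sketch of how such an estimate is used for control, assuming NumPy and hypothetical names throughout (this is not the trust-region controller of Chapter 5, and the fixed damping gain is our simplification), a single servo step maps the visual error through the pseudo-inverse of the estimated Jacobian:

    import numpy as np

    def servo_step(J_hat, y, y_star, gain=0.1):
        # J_hat  : (m, n) current estimate of the visual-motor Jacobian
        # y      : (m,)   current feature vector
        # y_star : (m,)   desired feature vector y*
        # gain   : damping, kept small since J_hat is only a local model
        # Least-squares solution of  J_hat @ dx = y_star - y
        dx = np.linalg.pinv(J_hat) @ (y_star - y)
        return gain * dx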

The Jacobian not only relates visual changes to motor changes, as is exploited in visual feedback control, but also highly constrains the possible visual changes: an attainable change $\Delta\mathbf{y}$ must lie in the (at most) $n$-dimensional subspace of solutions to $\Delta\mathbf{y} = J \Delta\mathbf{x}$ (remember $m > n$). Thus the Jacobian $J$ is also a visual model, parameterized in exactly the degrees of freedom in which our system can change. This is useful in a variety of active vision tasks, as we will explore in this paper. During the execution of a manipulation task, the agent progressively learns more and more about the environment. The Jacobians thus estimated form a piecewise linear approximation of the visual-motor model. The trust-region controller we describe in Chapter 5 causes the model estimate to be updated more often in difficult (non-linear) regions of the model than in easy (near-linear) regions. Thus the resulting piecewise linear model is estimated on an adaptive-size mesh.
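Section 4.2 specifies the actual estimation scheme; purely as an illustration of how a first-order model can be maintained on-line from observed motion, the following Broyden-style secant update (a common choice, not necessarily the one used here) corrects the estimate after each move:

    import numpy as np

    def secant_update(J_hat, dx, dy):
        # After a motor move dx produced the observed visual change dy,
        # apply a rank-one correction so that the updated model
        # reproduces dy = J_hat @ dx exactly, while changing as little
        # as possible in all directions orthogonal to dx.
        dx = np.asarray(dx, dtype=float)
        dy = np.asarray(dy, dtype=float)
        denom = dx @ dx
        if denom < 1e-12:      # no motion, nothing to learn
            return J_hat
        return J_hat + np.outer(dy - J_hat @ dx, dx) / denom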

Now the reader may ask: why is a visual-motor model a viewing model? We are more used to purely visual viewing-geometry models. Image measurements can be transformed to 6-DOF object pose using a Euclidean, affine, or projective model, typically defined purely visually by a set of special ``fiducial'' points in the images. In a vision-only system this relates different (often redundant) measurements on the same object to each other and allows precise pose estimation. In a visual-motor system, however, we can use the motor DOF's to constrain the visual measurements. For instance, the image positions of several features tracked on a rigid object held by a robot arm or hand can be expressed in the low-DOF motor (joint) frame, achieving the same accurate pose estimation as if a 6-DOF Euclidean, affine, or projective frame were used. We exploit this in the next section for prediction and outlier removal in tracking.
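The next section gives the actual prediction and outlier-removal method; as a sketch of the underlying idea only, with hypothetical names, a measured visual change can be split into the part explainable by motor motion (the column space of $J$) and a residual, where a large per-feature residual flags a likely tracking outlier:

    import numpy as np

    def motor_consistent_part(J_hat, dy):
        # Orthonormal basis for range(J_hat) via thin QR.
        Q, _ = np.linalg.qr(J_hat)
        dy_fit = Q @ (Q.T @ dy)    # component explainable by motor motion
        residual = dy - dy_fit     # large entries suggest tracking outliers
        return dy_fit, residual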



