
1 Introduction

In an active or behavioral vision system, the acquisition of visual information is not an independent open-loop process, but depends on the active agent's interaction with the world. Moreover, the information need not represent an abstract, high-level model of the world; instead it is highly task specific and represented in a form that facilitates particular operations. Visual servoing, when supplemented with on-line visual model estimation, fits into the active vision paradigm. Results with visual servoing and varying degrees of model adaptation have been presented for robot arms [1, 2, 3, 5, 6, 9, 13, 15, 16, 18]. Visual models suitable for specifying visual alignments have also been studied [7, 8, 19]. However, the focus of this work has been the movement (servoing) of the robot, not the on-line estimation of high-DOF visual-motor models. In this paper we focus on the model exploration aspect and present an active vision technique, with interacting action (control), visual sensing, and model-building modules, which allows simultaneous visual-motor model estimation and control (visual servoing) of a variety of robotic active agents.

We place the following requirements on the active visual model acquisition:

*
It should be general enough to estimate an arbitrary but smooth visual-motor model, without assuming any particular viewing geometry, camera configuration, or manipulator kinematics.
*
It should be efficient: usable for control after observing only a few agent movements, and thereafter able to adapt to a changing visual-motor transfer function.
*
It should not interfere with the visual-motor task. In normal operation, no extra ``calibration movements'' should be needed for model estimation.

A combined model acquisition and control approach has many advantages. In addition to permitting uncalibrated visual servo control, the on-line estimated models are useful for (1) prediction and constraining search in visual tracking [2, 13], (2) performing local coordinate transformations between manipulator (joint), world, and visual frames [9, 13], and (3) synthesizing views from a basis of agent poses [11]. We have found such an adaptive approach particularly helpful in robot arm manipulation when carrying out difficult tasks, such as manipulating flexible material [9, 13] or performing large rotations to explore object shape [12]. For a dextrous multifingered robot hand, such as the Utah/MIT hand, the fully adaptive approach is appealing because dextrous manipulation of a grasped object is much harder to model accurately than a typical robot arm system, where the object is rigidly attached to the end effector.

In this paper we present four main contributions:

*
A Broyden-type Jacobian estimator permits on-line estimation of the full, coupled visual-motor Jacobian, enabling control both without initial models or estimates and over significantly non-linear portions of the transfer function (see the sketch after this list).
*
Visual-space trajectory planning and control ensure convergence to a global, rather than a local, minimum.
*
A trust-region method gives convergence for difficult visual-motor transfer functions and makes the system more general, eliminating the manual tuning of step-discretization parameters (also illustrated in the sketch below).
*
We present an experimental evaluation of visual-motor model acquisition and visual feedback control on the Utah/MIT hand and PUMA robot arms.
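
To make the first and third contributions concrete, the following is a minimal Python sketch, under simplifying assumptions, of the two numerical ingredients: the rank-one Broyden secant update of the Jacobian estimate, and a least-squares servo step truncated to a trust-region radius. The function names, the fixed threshold, and the radius handling are illustrative only, not the implementation evaluated in this paper.

    import numpy as np

    def broyden_update(J, dx, dy):
        # Rank-one secant update: J <- J + ((dy - J dx) dx^T) / (dx^T dx),
        # where dx is the observed joint-space move and dy the observed
        # change in the image features.
        denom = dx @ dx
        if denom < 1e-12:        # skip negligible moves to keep the update stable
            return J
        return J + np.outer(dy - J @ dx, dx) / denom

    def servo_step(J, error, radius):
        # Least-squares joint move that reduces the image-space error,
        # truncated to a trust-region radius in joint space.
        dx = np.linalg.lstsq(J, -error, rcond=None)[0]
        norm = np.linalg.norm(dx)
        if norm > radius:        # bound the step; a full method would also
            dx *= radius / norm  # adapt the radius from the observed model fit
        return dx

In a servo loop, each observed (dx, dy) pair from an ordinary task-directed motion refreshes the Jacobian estimate, so no separate calibration movements are required.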

