Robot manipulators whose sensing is restricted to joint angles are limited in the kinds of behavior they can exhibit. Essentially, they can execute only a preprogrammed sequence of movements. Yet, with little variation, this is how robots currently used in industrial applications perform their tasks. One reason for this simple approach is the difficulty of handling complex sensing systems within the analytic modeling frameworks traditionally used in robotics. In typical industrial settings, complex sensing systems such as vision, if employed at all, are used only off-line when the system is in a well-specified state. For example, a camera mounted on a manipulator (eye-in-hand) has been used to locate parts on a table, but the process is performed only when the camera is in a specific, prespecified position with respect to the work surface. During the movement to pick up a part located by the vision system, sensory input from the (now moving) camera is not used. This approach simplifies modeling of the mapping between image and global world coordinate systems, but it represents a rather limited use of visual sensing and is somewhat lacking in robustness. Continuous feedback provides a more robust approach to sensory control, and recent work has begun to develop approaches for utilizing both sophisticated visual and somatic (e.g., force and torque) sensors in this way.
In this paper we propose and implement
a novel method for visual space trajectory planning
and for adaptive, high degree-of-freedom (DOF) visual
feedback control.
Our approach is inspired by previous work in differential feedback,
but differs in that,
while previous work has concentrated only on
the visual servoing algorithm, we integrate visual servoing
with higher-level visual space trajectory and task planning.
We want to perform complex tasks composed of
several primitive behaviors, using both visual servoing and open loop
control, where the number of sensed and controlled signals varies
during the task.
To do this, we need a controller
capable of handling a wider variety of situations than
is possible using previous approaches.
We present an adaptive controller based on, and integrated with,
a visual space planning scheme that
enables this higher-level, general task specification.
We experimentally evaluate its performance,
and then illustrate its usefulness in
several complex real-world manipulation tasks.
Differential visual feedback control in robotics was pioneered in the eighties, primarily by Lee Weiss and Arthur Sanderson [Weiss, Sand 87][Weiss 84][Sand, Weiss 83]. They proposed a classification of different types of visual control, and performed experiments in simulation with differential visual feedback controllers. Since then, real-world implementations have been made by a number of researchers, e.g., [Yoshimi Allen 9x][Hosoda, Asada 94][Hollinghurst, Cipolla 1993][Hager et al 93][Corke 94][Wije et al 93][Feddema et al. 92][Feddema et al. 91][Feddema, Lee 90][Feddema Mitchell 89][Conkie, Chong 90b][Chong, Conkie 90a]. For a review of this work we direct the reader to [J\"agersand 94] or [Corke 93].
The adaptiveness of the controller is key to the success of our approach. Previous approaches have assumed either that the system need only be calibrated once (e.g., [Chong, Conkie 90a]) using a set of specific ``test movements'', that it can be decoupled into a set of single-variable adaptive controllers (e.g., [Weiss, Sand 87]), or that it can be modeled using an ARMAX model (e.g., [Feddema, Lee 90]). These systems can model only a small class of visual-motor transfer functions. Limitations include requiring the transfer function to be nearly linear over the desired operating space, restricting the number and type of visual features that can be used simultaneously, and limiting the number of cameras and/or camera placement.
In order to avoid some of the above problems, we found it
helpful to look outside the mainstream of traditional control theory.
In particular we drew inspiration from numerical analysis, specifically,
the Broyden class of optimization methods for nonlinear problems (a
survey can be found in [Fletcher 87]).
Basically, we implement a Jacobian
estimator of the full local visual-motor transfer function
that updates the local model
using only information from the naturally occurring process noise.
This suffices for accurate feedback control over highly nonlinear
global spaces.
We also use this local model,
learned while moving under feedback control,
to execute open loop movements while, for
instance, visual features are occluded during an insertion movement.
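To make the idea concrete, the following is a minimal sketch of one common form of Broyden-style rank-one Jacobian updating combined with a pseudoinverse servoing step and an open-loop move computed from the same local model. It is an illustration under stated assumptions, not the implementation used in our experiments; the array shapes, gain value, and function names are ours.

\begin{verbatim}
import numpy as np

def broyden_update(J, dx, dy):
    # Rank-one (Broyden) update of the estimated visual-motor Jacobian.
    #   J  : current estimate of d(features)/d(joints), shape (m, n)
    #   dx : most recent joint-space move, shape (n,)
    #   dy : observed change in the visual feature vector, shape (m,)
    denom = dx @ dx
    if denom < 1e-12:          # skip the update for vanishingly small moves
        return J
    return J + np.outer(dy - J @ dx, dx) / denom

def servo_step(J, y, y_goal, gain=0.1):
    # One damped resolved-rate step toward the visual goal; the
    # pseudoinverse handles redundant visual features (m > n) in a
    # least-squares sense.
    return -gain * np.linalg.pinv(J) @ (y - y_goal)

def open_loop_move(J, y, y_goal):
    # Full step predicted from the current local model, usable when the
    # visual features are temporarily occluded and no feedback is available.
    return np.linalg.pinv(J) @ (y_goal - y)

# Sketch of a servo loop: command dx, observe dy, refine the local model.
#   dx = servo_step(J, y, y_goal)
#   y_new = observe_features(move(dx))   # hypothetical robot/vision calls
#   J = broyden_update(J, dx, y_new - y)
\end{verbatim}

In such a scheme the Jacobian estimate is refined using only the joint increments and feature changes that the servoing motion itself already produces, so no separate calibration movements are required.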
At the higher level of task specification and trajectory planning, we have been inspired by the user-friendly man-machine interfaces found in [Pook, Ballard 94][Kuniyoshi et al 92][Ikeuchi Suehiro 92]. Our ultimate aim is to provide natural, low-bandwidth human interaction with our system, combining the advantages of autonomous operation and telemanipulation. In contrast to Ikeuchi and Kuniyoshi, we neither center our planning around a prior world model nor make use of a global world coordinate frame. Instead, all specification takes place in image space, either directly, via operator pointing in the image, or with the aid of more sophisticated image processing techniques such as automatic feature location or object recognition.
The paper is organized as follows: The next two sections focus on the differential visual feedback controller. Section 2 describes the controller we have implemented. In section 3 we present extensive experimental tests of controller performance, some of which are applicable to a wide variety of visual servoing algorithms, showing that visual servoing improves repeatability by a factor of 5 on a PUMA robot arm, and that the adaptive controller outperforms a fixed-gain controller. We also demonstrate good convergence properties for a 6 DOF controller, and show that using redundant visual information (i.e., tracking many more visual features than the number of DOFs we control) improves positioning, yielding subpixel accuracy in visual space.
The rest of the paper focuses on issues of visual space task specification and planning at a level above the differential visual feedback controller. In section 4 we describe how we represent tasks, and present an extended controller architecture capable of handling a wide variety of robot manipulation problems via visual space task specification and planning. Section 5 shows how visual space task planning is used to solve real-world manipulation problems. First we show how to compose different modes of control to produce an insertion primitive. Then we use the system to solve a toddler-level puzzle, putting different shaped pieces into corresponding slots. Finally, we have two PUMA robot arms learn to manipulate a flexible piece of foam held between their end effectors. Using only images of how the foam is to be positioned and deformed, the controller has to learn to perform the manipulations that achieve the goal while simultaneously controlling all 12 joints in the two arms. Section 6 describes how our system can be applied in a more general setting, using visual as well as other sensors, through what we call a Perceptual Action task-level control scheme.