Influential early work on the use of adaptive visual feedback
in robotics
was done by Lee Weiss and Arthur Sanderson at CMU in the early
eighties
.
In an article [Sanderson and Weiss, 1983], and later in greater detail in Weiss' thesis
[Weiss, 1984], a much-referenced framework for a number of visual feedback
schemes was laid out. Weiss and Sanderson divide visual feedback behaviors into
four groups, formed by combining two distinctions.
The look-and-move behaviors first apply the vision routines to determine object pose; after vision is completed, a motion is executed with vision turned off. The process may then be repeated. This results in a ``jerky'' motion, where the manipulator comes to a stop each time visual information is gathered. The servoing behaviors instead run vision and motion in parallel, continuously updating the visual estimates and correcting the motion on the basis of these estimates. Position based refers to describing the motion in a Cartesian, absolute world coordinate system, whereas image based means describing the same motion in image coordinates, avoiding the use of a world coordinate frame altogether.

Weiss and Sanderson's position (probably as robotics people) is that position-based visual servoing warrants special emphasis, and is not merely a special case of image-based visual servoing. Most work to date has been done using the world Cartesian frame in some way, because explicit modeling of the system is usually easiest that way. However, in a truly adaptive system such as ours, that advantage no longer holds, since no prior modeling of the geometry is done.
Weiss and Sanderson mention the idea of successive linear approximations for the joint-to-world coordinate system transformation, but they do not abstract it all the way to a single approximation of the whole transfer function between visual perceptions and motor commands. In later work [Weiss and Sanderson, 1987], they further abstract the control problem to a multilevel one. They define the global model to be the world coordinate frame. The local model, or feature model, is what the robot can directly observe through its sensors. Thirdly, the robot model describes the robot joint angles. Their view of task specification is still a very traditional one that uses world coordinates, which is perhaps the underlying motivation for having the world coordinate frame at the top of the hierarchy of models.
A central contribution of Weiss and Sanderson is the suggestion that
feature values in the image could be servoed on directly using adaptive
controllers. However, in traditional control theory style, they want to
close single-variable control loops, each pairing one feature with one
control signal.
An algorithm to choose these pairs is presented; its essential
function (in our framework) is to rearrange the Jacobian into a
diagonally dominant matrix and then throw away the off-diagonal elements.
This algorithm is executed initially, and subsequently whenever it is
suspected that the transfer function has changed so much that a
reassignment of the feature and control signal pairs would be advantageous.
Note that this does not give the optimal moves for a coupled
problem, and in practical application of the algorithm,
one would have to choose nearly orthogonal
visual features. To get
these feature values from their world coordinate task specification they
need accurate world models, as well as accurate transformations to
feature space.
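The essential effect can be illustrated with a small sketch (our notation, not Weiss and Sanderson's actual algorithm): given an estimate of the Jacobian relating control signals to feature values, each control signal is greedily assigned the remaining feature it influences most strongly, and the off-diagonal couplings of the rearranged Jacobian are then discarded.

\begin{verbatim}
import numpy as np

def pair_features_to_controls(J):
    """Greedy feature/control-signal pairing, sketching the effect described
    in the text (not Weiss and Sanderson's actual selection algorithm).
    J[i, j] is the estimated sensitivity of feature i to control signal j;
    at least as many features as control signals are assumed."""
    J = np.asarray(J, dtype=float)
    n_features, n_controls = J.shape
    unassigned = set(range(n_features))
    pairing = {}                      # control signal j -> feature i
    for j in range(n_controls):
        # pick the remaining feature most strongly coupled to signal j
        i = max(unassigned, key=lambda r: abs(J[r, j]))
        pairing[j] = i
        unassigned.remove(i)
    # keep only the "diagonal" gains of the rearranged Jacobian; the
    # discarded off-diagonal entries are exactly the couplings that make
    # single-variable loops suboptimal for a coupled problem
    gains = {j: J[pairing[j], j] for j in pairing}
    return pairing, gains
\end{verbatim}

Each loop $j$ would then be closed by a single-variable controller acting on the feature paired with it, with a gain derived from the corresponding diagonal entry; for this to work well the chosen features should, as noted above, be nearly orthogonal.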
Weiss and Sanderson perform tests only in simulation. They claim that their system works well in two and three DOF simulations, but report problems with a five DOF test. A number of groups have since experimentally tried out various aspects of Weiss and Sanderson's proposals. There are two main lines in this later work: those working with traditional (sometimes adaptive) control theory methods on decoupled systems, and those utilizing the fully coupled Jacobian. One forward-looking aspect of Weiss and Sanderson's work, which surprisingly has not been exploited much so far, is their proposal to servo not only on direct visual sensory signals, but also on numerical transforms of them. For vision they give the following list of possible features to servo on: areas, ratios of areas, centroids, angles, points, and edge lengths.
An early practical implementation of a multivariate (coupled) differential visual feedback system was done by Chongstitvatana and Conkie [Conkie and Chongstitvatana, 1990] at Edinburgh. They control a three DOF manipulator using two cameras to measure the relative visual position between the object and the goal. With a simple gripper mounted on the arm, the utility of the system is demonstrated in a block-stacking task. Chongstitvatana and Conkie use the full Jacobian, and thus control a coupled system. They obtain the Jacobian estimate by making one test movement along each of the basis directions of the three control variables. This constant Jacobian turns out to be accurate enough for subsequent control in their experiments. They also mention that they can update the estimate using a sequence of moves, but no description is given. Somewhat confusingly, they reference [Weiss and Sanderson, 1987] on this, although Weiss' adaptive updating works only for single-variable control.
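Their finite-difference estimation amounts to the following sketch, where \texttt{observe} and \texttt{move} are hypothetical placeholders for the vision measurement and the manipulator command interface:

\begin{verbatim}
import numpy as np

def estimate_jacobian(observe, move, x0, step=0.01):
    """Estimate a constant visual-motor Jacobian by one test movement per
    control variable (finite differences), in the spirit of Chongstitvatana
    and Conkie.  observe() returns the current visual feature vector and
    move(x) commands the manipulator to configuration x; both are assumed
    interfaces, not part of any particular system."""
    x0 = np.asarray(x0, dtype=float)
    y0 = np.asarray(observe(), dtype=float)
    J = np.zeros((y0.size, x0.size))
    for i in range(x0.size):
        dx = np.zeros_like(x0)
        dx[i] = step                  # test movement along basis direction i
        move(x0 + dx)
        J[:, i] = (np.asarray(observe(), dtype=float) - y0) / step
        move(x0)                      # return to the reference configuration
    return J
\end{verbatim}

For a three DOF manipulator this requires exactly the three test movements mentioned above.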
Influential work showing how to practically implement a sophisticated
visual servoing system was done by
Feddema, Lee and Mitchell, who from 1989 to 1992 published a series of
papers (e.g. [Feddema and Mitchell, 1989, Feddema and Lee, 1990, Feddema et al., 1991, Feddema et al., 1992]). They implement a system that,
despite having
two cameras, is essentially monocular. The cameras serve different purposes.
One is an eye-in-hand camera, and the other a fixed-mounted scene camera.
The reason for having two cameras with different purposes is explained by
their approach to finding the Jacobian. Feddema et al. point out that in the
adaptive system of [Weiss and Sanderson, 1987], erroneous movements are made initially,
before adaptation has taken place
.
Similarly, the system of Chongstitvatana and Conkie requires three test
movements (in general, as many as the number of DOFs controlled)
to find the Jacobian by finite differences.
Instead of initially making a few movements unrelated to the task,
Feddema et al. argue that a rough estimate of the Jacobian should be given
by world models. In the case of an eye-in-hand camera this is particularly
suitable: the camera calibration is easy, and for many (practically important,
but not all) robot geometries the Jacobian can be parameterized simply in
terms of depth in the image, giving an (almost always) easy-to-model Jacobian.
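As an illustration (with one common sign convention, and not necessarily the exact parameterization used by Feddema et al.), the image Jacobian of a single point feature seen by a moving camera is

\[
\frac{d}{dt}\!\left(\begin{array}{c} x \\ y \end{array}\right)
=
\left(\begin{array}{cccccc}
 -\frac{1}{Z} & 0 & \frac{x}{Z} & xy & -(1+x^{2}) & y \\
 0 & -\frac{1}{Z} & \frac{y}{Z} & 1+y^{2} & -xy & -x
\end{array}\right)
\left(\begin{array}{c} \mathbf{v} \\ \boldsymbol{\omega} \end{array}\right),
\]

where $(x,y)=(X/Z,\,Y/Z)$ are the normalized image coordinates of the point, $Z$ its depth, and $(\mathbf{v},\boldsymbol{\omega})$ the translational and angular velocity of the camera. Every entry is known from the image measurements themselves except the depth $Z$, which is the one quantity the world model has to supply.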
A main contribution of Feddema et al. is a ``teach by showing'' system, where the robot is taught a task by manually segmenting it, taking the robot through the different configurations, and specifying which features should be servoed on at what point. However, the features used in this system are still essentially just of the point type. Feddema et al. also point out that the problem of long delays in visual feedback systems can be tackled by keeping the low-level joint control of traditional robots, and using the vision system to provide set-points for the joint control. The vision cycle time is 28 ms, and the individual joint loops of the Puma run at 0.875 ms.
The task function approach to robot control is an organized way of combining several control objectives into one task. In [Espiau et al., 1992] Espiau, Chaumette and Rives show how to cast calibrated visual servoing in this framework.
Wijesoma et al. at Cambridge, in a well-known article [Wijesoma et al., 1993], provide an overview of the accuracy advantages of visual feedback control in a situation where calibration is poor. They present several experiments with a two DOF system, comparing a transformation-based world coordinate control scheme to a differential visual feedback scheme using a static (constant) Jacobian. The inherent problem of slow sampling and long delays in TV-camera-based vision systems is tackled differently by Wijesoma et al. They assume that the environment outside the robot changes slowly, and thus can be tracked by slow visual routines. To observe the robot motions at a higher rate, they mount a marker beacon on the end effector, which they track with special-purpose hardware at a fast 5 ms rate.
Hager, Chang and Morse in [Hager et al., 1993] implement a three DOF system which in essence is similar to that of Chongstitvatana and Conkie [Conkie and Chongstitvatana, 1990]. The problem of dealing with the four measurements obtained from tracking a point in two cameras is solved by positioning the cameras so that one axis (the y axis) is identical in both. Hager et al. mention the possibility of updating the Jacobian approximation in an adaptive scheme, and say that they are working to find a method for that.
Hollinghurst and Cipolla in [Hollinghurst and Cipolla, 1993] present a different approach to uncalibrated hand-eye coordination, which nevertheless is often grouped together with the differential visual feedback methods. An initially uncalibrated stereo camera pair is used and, through a few test movements, the parameters of an affine viewing geometry model are identified, and this model is related to the robot model. A limitation of this approach is that a global mapping is used between the visual and world coordinate frames, and thus the robot joint-to-world frame calibration needs to be known a priori. They develop a method to move the robot by applying inverse affine stereo transformations, which transform relative positions from the visual frame to the world coordinate frame.
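Their idea can be summarized as follows (a sketch in our notation, not necessarily their exact formulation). The image coordinates of a point in the two cameras are modeled as an affine function of its world position,

\[
\left(\begin{array}{c} u \\ v \\ u' \\ v' \end{array}\right)
= M\,\mathbf{X} + \mathbf{t},
\]

where the $4\times 3$ matrix $M$ and the offset $\mathbf{t}$ are identified by least squares from the image measurements of the initial test movements. A relative position $\Delta\mathbf{u}$ specified in the visual frame is then mapped to the world frame as $\Delta\mathbf{X} = M^{+}\Delta\mathbf{u}$, after which the a priori known robot calibration converts this world-frame displacement into joint motions.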
In later work the structure of the affine or projective viewing geometry is utilized to aid in specifying movements. Cipolla et al. [Cipolla et al., 1994] track the hand of a human and use the affine frame to construct the intersection of the hand pointing ray with, say, a table, also identified in the affine frame. This is used to program robot movements. Hager [Hager et al., 1995] develops a ``library'' of visually specified constraints, such as the alignment of lines and planes, which allows movements to be specified at a higher level than primitive feature alignments.
We have previously shown how to extend visual servoing to high DOF control systems (typically from 3 DOF to 6 DOF or 12 DOF), and how to estimate a time-varying visual-motor model on-line during the manipulation [Jägersand, 1994, Jägersand and Nelson, 1994]. One main difference between our work and the previous adaptive methods by Feddema et al. and Papanikolopoulos et al. is that we use non-parametric estimation, which does not require any a priori modeling at all. This allows us to demonstrate visual servoing for non-rigid manipulation and in more DOFs than previously shown. Several recent conference papers [Jägersand and Nelson, 1995, Jägersand, 1996, Jägersand et al., 1997, Jägersand, 1997] define the core of our current research on how to combine low-level visual servo control with higher-level visual-space task specification. We also show how to use the estimated visual-motor models for view synthesis, and how to control the Utah/MIT hand.
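As an illustration of what such on-line estimation involves (given here as a sketch rather than the precise form used in the cited papers), a Jacobian estimate $\hat{J}_{k}$ can be refined after every move with a Broyden-type secant update,

\[
\hat{J}_{k+1} = \hat{J}_{k}
 + \frac{\left(\Delta\mathbf{y}_{k} - \hat{J}_{k}\Delta\mathbf{x}_{k}\right)\Delta\mathbf{x}_{k}^{T}}
        {\Delta\mathbf{x}_{k}^{T}\Delta\mathbf{x}_{k}},
\]

where $\Delta\mathbf{x}_{k}$ is the commanded change in motor coordinates and $\Delta\mathbf{y}_{k}$ the observed change in visual features over move $k$. Only data generated by the manipulation itself are used, so no prior model of the robot or camera geometry is needed.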
Hosoda and Asada in [Hosoda and Asada, 1994] use essentially the same Jacobian estimation technique as we do, and they are the only other group we know of who do fully coupled control without any prior modeling. In [Hosoda et al., 1995], they introduce a limited number of viewing geometry constraints in order to do visual path planning with obstacle avoidance.
Another approach to Jacobian estimation is taken by Ritter, Martinez and Schulten [Ritter et al., 1992]. They use neural network learning techniques to organize a mapping of 3 DOF Jacobian parameters. They test the approach in simulation and find that about 20 000 test movements are required to learn the mapping (of 4 by 3 Jacobians). This is not a reasonable number of test movements to perform on a real manipulator. The visual-motor model typically changes several times during a manipulation, for instance when grasping a new object. Since the exact grasp is typically not known, it would be difficult to construct a simulation to train the system on the new real-world configuration. The approach may be useful for more repetitive control problems, such as eye movements, where the mapping does not change with time.