
Visual Servoing:

Uncalibrated Hand-Eye Coordination through Differential Visual Feedback

This is a short, example-based introduction to what we do in our lab with Differential Visual Feedback (DVFB) based robot control. A more complete treatment of the area can be found in the hypertext article: Jägersand M., Nelson R., On-line Estimation of Visual-Motor Models using Active Vision.

Suppose the following scenario: we can watch a robot manipulator through one or more cameras. We initially know nothing about the manipulator's kinematics or placement, and nothing about the camera calibration.

Fig. 1. Basic robot arm control setup. For 3D and higher DOF control two cameras are used.

We are, however, allowed to watch a set of preselected visual features, and can observe how they change when we drive the robot joints. In short, we have the problem of learning an unknown visual-motor model f between the robot control variables x and the observed visual features y, y = f(x).

At the same time we want to use this model estimate to perform a task. The task is specified in visual space, i.e. we want to fulfill some set of visual goals {y}.

In this simple example the task is to move a PUMA 760 manipulator into a particular pose, specified by giving desired visual positions for small light bulbs on the end effector. The behavior of the adaptive Differential Visual Feedback algorithm is shown in the mpeg movie. The goal here is to move the manipulator down to the surface of the cardboard box in the lower left corner of the movie.

First the robot locally explores the transfer function by moving along three basis directions. This is used to build a local linear approximation of the transfer function at that point.
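
As a concrete illustration, here is a minimal sketch of this exploratory estimation in Python, assuming hypothetical move_joints and observe_features interfaces (these names are not from the article; the sketch only shows the finite-difference idea):

    # Minimal sketch: estimate the visual-motor Jacobian J = dy/dx by small
    # test moves along each control basis direction.
    import numpy as np

    def estimate_jacobian(x0, move_joints, observe_features, n_ctrl, step=0.02):
        y0 = observe_features()                  # features at the start pose
        J = np.zeros((len(y0), n_ctrl))
        for i in range(n_ctrl):
            dx = np.zeros(n_ctrl)
            dx[i] = step                         # exploratory move along basis i
            move_joints(x0 + dx)
            J[:, i] = (observe_features() - y0) / step
            move_joints(x0)                      # move back before the next probe
        return J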

The linear approximation is inverted to give an estimate of how to move down to the goal, and a movement downward is started. During this downward movement a continuous differential visual feedback loop keeps the robot trajectory on a straight line in visual space. At the same time the new information gained during the movement is incorporated into the estimated visual-motor model, thus making it possible to learn and control systems with nonlinear models.
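
The two ingredients of this loop can be sketched under the same assumed interfaces: a Newton-like control step using the current Jacobian estimate, and a rank-one secant (Broyden-style) update that folds the observed motion back into the model. The exact estimator used in our experiments is described in the linked article; this is only an illustrative version.

    # One control step toward the visual goal y_star, plus a rank-one secant
    # (Broyden-style) update making the Jacobian consistent with the motion
    # actually observed: J_new @ dx ~= dy.
    import numpy as np

    def servo_step(J, x, y, y_star, gain=0.2):
        dx = gain * np.linalg.lstsq(J, y_star - y, rcond=None)[0]
        return x + dx, dx

    def secant_update(J, dx, dy):
        return J + np.outer(dy - J @ dx, dx) / (dx @ dx)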

A linear or near-linear model can be estimated using only one fixed visual-motor Jacobian. In practice this allows us, for instance, to transform globally between an affine viewing-geometry frame and the Cartesian frame of a calibrated robot. In general, however, visual-motor coordinate frame transformations are not linear. For instance, the transformation between retinal (or camera) coordinates and the motor frame of a revolute arm (such as the human arm and most industrial robot arms) is nonlinear. A single Jacobian, providing a linear estimate, won't be a good approximation of the underlying visual-motor model, but a series of linear models, providing a first-order spline approximation, is adequate in most cases.

Fig. 2. Recursive estimation of the underlying non-linear visual-motor model using a piecewise linear estimator. Mpeg movie (116k).

In our case the model is estimated recursively on-line, so the linear pieces are estimated as a time series of Jacobians. Fig. 2 shows how, during a particular trajectory (yellow line) through motor space (x), the non-linear model is approximated piecewise by a series of planes.
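
Building on the two sketches above, the on-line loop simply alternates control steps with model updates, so the sequence of Jacobians forms the piecewise linear approximation shown in Fig. 2 (again with hypothetical robot and tracker interfaces):

    # Recursive on-line estimation along a trajectory: each iteration takes a
    # control step with the current Jacobian, observes the result, and updates
    # the model, producing a time series of local linear pieces.
    import numpy as np

    def run_trajectory(J, x, y_star, move_joints, observe_features,
                       iters=100, tol=1.0):
        y = observe_features()
        for _ in range(iters):
            if np.linalg.norm(y - y_star) < tol:
                break                                 # visual goal reached
            x, dx = servo_step(J, x, y, y_star)       # sketched above
            move_joints(x)
            y_new = observe_features()
            J = secant_update(J, dx, y_new - y)       # next linear piece
            y = y_new
        return x, J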

A three-step task: Pickup

More complex real tasks would typically be divided into subtasks, each of which has its own subgoal in visual space. The task we look at here would be suitable, for instance, for picking up objects with a gripper. The first frame of the mpeg movie shows the breakdown into subtasks.

First a long-reach 3 DOF point move is performed, whose goal is to bring the end effector close above the object. This movement is controlled by tracking the uppermost light in two cameras spaced approximately 30 degrees apart (thus giving four y values).

When positioned over the object, the next subtask is to align the end effector at the correct angle for the close move in. In this subtask all 6 DOF (joints) of the robot arm are controlled, by measuring the 12 y values obtained from tracking the 3 targets on the end effector in two cameras.
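
For concreteness, the measurement vector for this subtask can be thought of as the image coordinates of the 3 tracked targets from both cameras stacked into one 12-dimensional y (a sketch with illustrative names only):

    # Stack (u, v) image coordinates of 3 tracked targets from 2 cameras into
    # the 12-dimensional visual feature vector y used for 6 DOF control.
    import numpy as np

    def stack_features(left_targets, right_targets):
        # each argument: 3 (u, v) pairs from one camera's tracker
        return np.concatenate([np.ravel(left_targets), np.ravel(right_targets)])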

The same strategy works for insertion:

Fig. Visual plan to insert a piece of foam into its box. Left camera image shown. Click on the image to see the movie.

The task is to put a piece of packing foam back into the box it came from. To make identification of features easy, round white dots are used both to mark the points to be tracked during the movement and the corresponding goal points.

To generate intermediate goals, the only additional knowledge we need is the direction of physical "up" in visual space. We determine that from vertical lines that are usually present in typical images anyway. Here we added a few extra just under the goal points to get good local estimates.

The alignment phase between the 3 DOF reach movement and the 6 DOF insertion movement is used to calibrate the 14-by-6-dimensional over-determined Jacobian.

Once aligned, the only thing we need to do is to make the close move in around the object. This is done under 6 DOF control, as described above.

In the insertion, as opposed to the pickup, the visual features will get occluded before we have completed the task. Knowing "up" in visual space and having a locally valid model in the visual-motor Jacobian, we can do the last downward movement open-loop, just solving for the joint movement Δx in (-up) = J Δx.
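
A sketch of that last open-loop step, assuming up_vis holds the visual-space "up" direction estimated from the vertical lines (illustrative names, not the article's code):

    # Final open-loop move: with the features about to be occluded, solve the
    # over-determined system J @ dx = -up_vis in the least-squares sense for
    # the joint motion that takes the end effector straight down in visual space.
    import numpy as np

    def open_loop_down(J, up_vis, step=1.0):
        dx, *_ = np.linalg.lstsq(J, -step * np.asarray(up_vis), rcond=None)
        return dx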

Assembling sub-behaviors to do real tasks: Solving a shape puzzle

By sequencing the visually controlled pickup and insert operations described above, we can solve assembly tasks. Here we insert children's puzzle pieces into their corresponding slots, doing both transportation and fine manipulation under visual control.

In the last part of the insertion the puzzle piece gets occluded. We can still control it open loop, using the learned visual-motor mapping but now without feedback. To generate open-loop commands we establish a local visual coordinate system around the point of insertion, with its first axis aligned with world-space up and the other two lying (approximately) in the world-space horizontal plane.

To insert and drop the piece, a "canned" sequence of movements specified in this local frame is carried out. First the piece is moved down, then a combination of vertical and horizontal movements is performed to scrape the puzzle piece off the magnet. Last, the (hopefully) empty manipulator and magnet are withdrawn from the slot and put back in the view of the cameras, ready to go and get the next puzzle piece under visual control.
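
A sketch of how such a canned sequence might be driven through the learned mapping: each step is a small displacement expressed in the local visual frame (columns of E are the frame's axes in visual space) and is converted to a joint move with the current Jacobian. The step list and interfaces are illustrative assumptions only.

    # Execute a scripted sequence of moves, each given in the local visual frame,
    # open loop through the learned visual-motor Jacobian.
    import numpy as np

    def run_canned_sequence(J, E, steps, x, move_joints):
        # E: matrix whose columns are the local frame axes in visual space
        # steps: list of small displacements in local-frame coordinates
        for s in steps:
            dy = E @ np.asarray(s)                        # desired visual motion
            dx, *_ = np.linalg.lstsq(J, dy, rcond=None)   # corresponding joint move
            x = x + dx
            move_joints(x)
        return x

    # e.g. down, then sideways and up to scrape the piece off the magnet, then retract:
    # steps = [(-1.0, 0.0, 0.0), (0.2, 0.5, 0.0), (2.0, -0.5, 0.0)]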

Fig. Visual space plan to pick up and reach close to the insertion point (left), and final alignment and insertion (right). Click on the images to see movies of the robot solving the puzzle.

Non-rigid Manipulation in 14 (tracked) by 12 (controlled) DOF

High DOF control problems involving manipulation of non-rigid objects are very hard to solve with traditional model-based robot control paradigms. The modeling of the problem can be messy, and exact modeling may not even be suitable, since the result of a particular command may not be deterministic.

In the end, the model we arrive at is typically not analytically invertible anyway, or analytic inversion is at best very hard.

Contrast this with an adaptive Differential Visual Feedback based controller. Now we don't need an exact a-priori model; instead the robot learns and refines piecewise linear visual-motor models of the system while it is performing its task in the real environment.

The mpeg movie Translation shows two Puma robots manipulating a piece of foam through a mostly translating motion. The pose of the piece is determined by tracking the white-dot tracking targets attached to the foam. A total of 14 features are tracked in two cameras. The manipulators are unconstrained, that is, fully free to move in the whole 12 DOF control space. The controller learns this 14 x 12 dimensional kinematic mapping between visual (perception) and motor (action) space.

Non-rigid foam folding

This next example shows a more interesting manipulation. We have the robots try to fold the foam. The left image below shows the start configuration, and the right image is the visual goal we give to the adaptive controller.

Fig. Initial configuration (left) and goal (right).

From a transfer function point of view this is a hard problem. The transfer function is highly nonlinear, and we start from a near-singular state (remember that singularities here are not just robot singularities, but all singularities in the total transfer function from robot control space to visual space). The condition number of the visual-motor Jacobian changes from an initially bad 1700 to a much better 104 at the goal.
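
The condition numbers quoted above can be read directly off the estimated Jacobian; a quick sketch:

    # Conditioning of the estimated visual-motor Jacobian: the ratio of the
    # largest to the smallest singular value. A large value signals a
    # near-singular transfer function.
    import numpy as np

    def jacobian_condition(J):
        s = np.linalg.svd(J, compute_uv=False)
        return s[0] / s[-1]       # equals np.linalg.cond(J, 2)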

Naively running a Newton-based search method, as shown in FoldFail, does not work. Looking at the left manipulator's final position we see that joints 4 and 6 are lined up, so we are in a singular position. In principle the algorithm can deal with this (although at a reduced order of convergence, 1.6 -> 1), but here we ran into the joint limits on joints 4 and 6 before we got out of the singularity.

We adopt a solution often used in numerical analysis to improve convergence: embedding. By giving a few (here 2) "way-points" in visual space we guide the algorithm around numerical (and mechanical) difficulties.
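
In code the way-point idea amounts to nothing more than servoing to a short chain of intermediate visual goals in turn. A minimal sketch, where servo_to stands for the adaptive DVFB loop described earlier and the way-points are supplied by hand:

    # Embedding via way-points: instead of one long step toward a distant goal,
    # visit a few hand-picked intermediate visual goals in turn, avoiding the
    # near-singular region of the transfer function.
    def servo_through_waypoints(servo_to, waypoints, y_goal):
        for y_target in list(waypoints) + [y_goal]:
            servo_to(y_target)    # run the adaptive DVFB loop until reached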

The "way-point" solution can be seen in the movie FoldSuccess .

The final goal is reached with an average accuracy of 1 pixel in visual space.


Martin Jägersand
