
Perceptual Actions:

Vision Based Uncalibrated Robot Control

By Martin Jägersand.

Fig. Inserting and screwing in a lightbulb under visual control. PUMA robot arm and Utah/MIT robot hand. MPEG movie from the left eye-in-hand camera. More MPEG movies are available on the demo page.

Traditional robot control uses world or joint coordinate representations to describe goals and to plan and execute moves. In a typical "engineered" industrial environment this works well: the environment, the robot model and the task are known. These a priori models are used to transform the task (for instance, one derived from a 3D CAD drawing) into a sequence of robot motions.

However, most natural settings are not structured or easy to model analytically, and traditional robotics has had little success in these un-engineered environments. We advocate describing goals in a reference frame more relevant to the task. Such a description draws on many sensory signals. We call the space of these descriptions Perceptual space. It is derived directly from sensory signals, in our work primarily from vision, without any a priori calibration.
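As a concrete, purely illustrative example of such a perceptual-space description (not taken from the papers below), a goal can be stated entirely in terms of tracked image features, with no world coordinates involved; the feature coordinates here are invented, and image_space_error is a hypothetical helper:

```python
# Hypothetical sketch: a task goal expressed directly in image (perceptual) space.
# Nothing is calibrated; the goal is simply "make these tracked image features
# coincide with where they appear in a stored goal view".
import numpy as np

def image_space_error(tracked_features, goal_features):
    """Task error as a flat vector of pixel differences.

    tracked_features, goal_features: (N, 2) arrays of image feature
    coordinates (e.g. from a visual tracker and from a goal image).
    The task is solved when this error vector is driven to zero.
    """
    return (np.asarray(tracked_features) - np.asarray(goal_features)).ravel()

# Example: a 4-point alignment goal, e.g. a grasped bulb held over a socket.
current = np.array([[120.0,  85.0], [140.0,  88.0], [118.0, 110.0], [142.0, 112.0]])
goal    = np.array([[200.0, 150.0], [220.0, 150.0], [200.0, 175.0], [220.0, 175.0]])
print(image_space_error(current, goal))  # 8-vector the controller drives to zero
```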

In this type of control, sensory data tends to be hard to model but abundant. To deal with this at the low-level servo control, we use adaptive or self-calibrating methods which estimate on-line the model needed to carry out a manipulation task (a small numerical sketch of this idea follows the paper descriptions below). Following is a list of papers available on-line describing these ideas and their implementation:

Two short overviews available in HTML:

Three slightly longer papers. The first deals with the high-level problem of how to program the robot to solve whole tasks in a sensory (visual) frame rather than the traditional world coordinate frame. The second focuses on the low-level visual-motor model acquisition and servoing. The third describes experimental results on the accuracy and convergence of the adaptive servoing. Together they give a more in-depth treatment of what we are doing.

For a longer version of all of the above, and some more, see my thesis:

On-line model estimation allows us to solve some problems which would be very difficult using traditional methods. In the first of the following two papers we show how to fine-manipulate a grasped object in 6 DOF using a Utah/MIT dextrous hand. In the second, a more unconventional "backwards" use of the estimated visual-motor model allows us to do view synthesis and visually simulate the execution of tasks on our uncalibrated hand-eye system.
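To make the on-line estimation idea concrete, here is a minimal numerical sketch in the spirit of the description above: a secant (Broyden-style) rank-one update of the visual-motor Jacobian, a damped least-squares servo step, and the "backwards" use of the same model to predict image motion for a commanded joint motion. This is an illustrative sketch under those assumptions, not the exact algorithm of the papers; read_image_features and move_joints are hypothetical stand-ins for a tracker and a robot interface.

```python
# Hypothetical sketch of uncalibrated visual-motor control: no camera or robot
# calibration, only an on-line estimated image Jacobian J (image dims x joint dims).
import numpy as np

def broyden_update(J, dx, dy, eps=1e-9):
    """Rank-one secant update so the model explains the last observed move: dy ~ J dx."""
    denom = float(dx @ dx) + eps
    return J + np.outer(dy - J @ dx, dx) / denom

def servo_step(J, error, gain=0.2, damping=1e-3):
    """Damped least-squares joint correction that reduces the image-space error."""
    m = J.shape[0]
    return -gain * (J.T @ np.linalg.solve(J @ J.T + damping * np.eye(m), error))

def predict_image_change(J, dx):
    """'Backwards' use of the same model: predict how the image features move
    for a commanded joint motion dx (the basis of visual simulation)."""
    return J @ dx

# Control loop sketch (pseudo-interfaces, not a real robot API):
# y = read_image_features(); J = initial_estimate
# while np.linalg.norm(y - y_goal) > tol:
#     dx = servo_step(J, y - y_goal)
#     move_joints(dx)
#     y_new = read_image_features()
#     J = broyden_update(J, dx, y_new - y)
#     y = y_new
```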

Visual feedback control can potentially change the rules for how industrial robot arms are built. Lighter, less rigid, and mechanically less accurate arms can, under visual feedback, be positioned much more accurately than under joint feedback alone. We want to experiment with very inexpensive robots, and a team of ME students, Chris Painter, Duncan Callaway and Susan Parente, helped us design and build such a robot. More documentation on this project is underway, but for now you can view the project specification and a set of drawings of the robot.

Following is a listing of more uncalibrated hand-eye coordination work:


[Other research: Image Based View Synthesis, Saliency and Attention]


Martin Jägersand