
1.1 The Need for a New Approach to Robotic Manipulation

A human can easily pick up a visible object, and can even watch a whole task and transform the visual information into the motor (muscle) movements needed to carry it out. Conversely, a human can visually imagine the execution of a motor task. When picking up a tool, or any object, a person quickly learns the new manipulations needed to use it. People use these capabilities all the time in their daily lives.

By contrast, conventional robotics solves none of these tasks easily or reliably. The difficulty in making robot manipulators solve human-like hand-eye tasks is not a lack of suitable manipulators: robot arms and, more recently, robot hands have been built that are inspired by, and have degrees of freedom similar to, the human arm and hand [Andeen et al., 1988, Jacobsen et al., 1986]. Instead, the problem lies mainly in the lack of a suitable programming and control methodology for these advanced arm and hand manipulators. In this thesis we set out to develop such a methodology: one that allows robot manipulators to solve everyday human hand-eye coordination tasks.

In conventional visual robot control systems, world coordinate representations in a Cartesian frame are used to unify visual and motor space data. This is attractive in industrial applications because it lets the robot operator or programmer describe tasks in a coordinate frame natural to an engineer. However, the world coordinate representation is native to neither the vision system nor the robot manipulator, so both have to be calibrated. Typically the calibration involves analytically modeling each subsystem (the cameras, the visual feature extraction, the robot and its controllers, and the geometry of the work cell) and composing a complete system model from the part models [Craig, 1989]. Besides the modeling being very tedious, this yields a system that is inflexible to any change in the system or the environment.
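To make the calibration burden concrete, the following minimal sketch composes two such part models: a pinhole camera and a planar two-link arm. All names and numbers here are hypothetical illustrations, not the systems cited above; the point is that every parameter must be measured, and a task expressed in world coordinates can only be executed by chaining the calibrated models.

    import numpy as np

    # Calibrated part models -- every parameter below must be measured.
    # All names and values are hypothetical, for illustration only.
    f, cx, cy = 600.0, 320.0, 240.0      # camera intrinsics (pixels)
    T_world_cam = np.eye(4)              # camera extrinsics: world -> camera
    l1, l2 = 0.30, 0.25                  # link lengths of a planar 2-link arm (m)

    def project(p_world):
        """Pinhole projection of a 3-D world point into the image."""
        p_cam = T_world_cam @ np.append(p_world, 1.0)
        return np.array([f * p_cam[0] / p_cam[2] + cx,
                         f * p_cam[1] / p_cam[2] + cy])

    def inverse_kinematics(x, y):
        """Closed-form elbow-down IK for the planar 2-link arm."""
        c2 = (x**2 + y**2 - l1**2 - l2**2) / (2.0 * l1 * l2)
        q2 = np.arccos(np.clip(c2, -1.0, 1.0))
        q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2),
                                           l1 + l2 * np.cos(q2))
        return np.array([q1, q2])

    # A goal expressed in world coordinates (work plane 1 m from the camera):
    goal_world = np.array([0.35, 0.20, 1.0])
    uv = project(goal_world)                              # where vision should see it
    q = inverse_kinematics(goal_world[0], goal_world[1])  # how the arm should reach it
    # An error in f, cx, cy, T_world_cam, l1 or l2 propagates directly into
    # uv and q, and the chain must be re-calibrated whenever the setup changes.

The chain runs in either direction (image to world to joints, or world to image), and in either case an error in any single calibrated parameter corrupts the final result.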

Outside of carefully engineered industrial environments, however, millimeter-accurate descriptions of the environment, objects, and manipulations are almost impossible to obtain. This is where the conventional calibrated approach to robotics fails.

What makes humans able to effortlessly move and manipulate in natural, non-engineered environments where robots fail? With some simplification, one can say the main difference is this: robotic manipulation tends to rely maximally on pre-programmed models and minimally on sensory information, whereas humans rely very little on a priori models and typically make maximal use of sensory information.

In this thesis, we develop an approach to vision-based robotic manipulation that, unlike traditional robotics but like the human, needs minimal or no a priori modeling and makes maximal use of (typically redundant) visual sensory data. The sensory data is used both to carry out the manipulations and to learn the required visual-motor manipulation models; a minimal sketch of such a control loop is given after the list below. Some of the advantages of our approach over conventional approaches are:

  1. Our approach works in unstructured and time-varying environments by continuously sensing the environment.
  2. Redundant (extra) sensory information is used, but unlike in conventional systems it incurs no extra modeling cost and little extra computational cost.
  3. With vision as the primary sensing and control modality, the robotic system is programmed (in terms of visual alignments) in a frame that the human operator can literally see. One can say that, for servo control purposes, the robotic system sees the world much as the human does. This makes it easy to construct natural man-machine interfaces.
  4. The manipulations are both specified in visual space and carried out using visual servoing. This avoids the calibration errors that arise when the task specification and the servo control use different coordinate frames.
  5. Manipulations can be (and typically are) specified in tool or object frames without any extra calibration or coordinate transformations.
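To illustrate the kind of control loop these properties describe, the following is a minimal sketch of an uncalibrated image-based servoing loop. It is illustrative only: read_features and move are hypothetical interfaces to the vision system and the robot, and the Broyden-style secant update is one standard way to estimate a visual-motor Jacobian online from observed feature motion; the estimation methods actually used in this thesis are developed in later chapters.

    import numpy as np

    def visual_servo(read_features, move, y_goal, J0,
                     gain=0.1, tol=1.0, max_iter=200):
        """Uncalibrated image-based servoing loop (illustrative sketch).

        read_features() returns the current image feature vector y (pixels);
        move(dq) commands a small joint displacement.  Both are hypothetical
        interfaces.  The visual-motor Jacobian J = dy/dq is never calibrated:
        it is estimated online from the observed feature motion.
        """
        J = J0.copy()                       # a rough initial guess suffices
        y = read_features()
        for _ in range(max_iter):
            err = y_goal - y
            if np.linalg.norm(err) < tol:   # convergence judged in image space
                return y
            dq = gain * (np.linalg.pinv(J) @ err)   # damped least-squares step
            move(dq)
            y_new = read_features()
            dy = y_new - y
            # Broyden secant update: correct J by its prediction error along dq.
            if dq @ dq > 1e-12:
                J += np.outer(dy - J @ dq, dq) / (dq @ dq)
            y = y_new
        return y

Because both the goal and the convergence test live in image space, no camera or kinematic calibration enters the loop; and since y_goal can be defined relative to features measured on the object or tool itself, the task is expressed in the object frame with no extra coordinate transformation.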
