Our approach integrates three main functions into the hand-eye manipulation system: model learning, movement, and visual simulation. These functions run as concurrent processes. A unique aspect of our system is that all three are based on the same visual and motor signals and use the same type of model. Figure 1.2 illustrates the flow of signals for the different functions.
In model learning, both image and proprioceptive motor sensory data are measured, and a visual-motor model is estimated on-line. The on-line estimation lets the system work in an unstructured environment where no a priori model is available, and it adjusts the visual-motor model to track a (relatively slowly) changing environment. Combined with the visual space task decomposition described in section 7.3, it allows the control frame to be changed between high-DOF object- or tool-centered frames useful for fine manipulation and low-DOF frames useful for transportation.
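To make the on-line estimation concrete, the sketch below shows one common way a visual-motor Jacobian can be refined from the measured motor and visual changes, namely a Broyden-style secant update. The function name, arguments and use of NumPy are illustrative assumptions, not necessarily the exact estimator used in the system.

\begin{verbatim}
import numpy as np

def broyden_update(J, dq, dy, eps=1e-9):
    """Secant (Broyden-style) update of a visual-motor Jacobian estimate.

    J  : current estimate, shape (m, n); maps a motor change dq to a
         visual change dy in appearance space
    dq : observed motor change, shape (n,)
    dy : observed change of the appearance vector, shape (m,)
    """
    denom = float(dq @ dq)
    if denom < eps:                    # ignore vanishingly small motions
        return J
    # Correct J so that it reproduces the newest observation: J_new @ dq == dy
    return J + np.outer(dy - J @ dq, dq) / denom
\end{verbatim}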
Visual servoing is used to bring about movement. The visual servoing needs only a sparse, minimal model: it is initialized with just one (guessed or estimated) Jacobian. During movement the model estimation procedure acquires a more accurate and denser model, so manipulation performance improves with experience.
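A hedged sketch of the corresponding control step: given the current Jacobian estimate, a resolved-rate style correction moves the appearance vector toward its goal. The scalar gain and the function name are hypothetical, and the real controller additionally limits the step size as discussed below.

\begin{verbatim}
import numpy as np

def servo_step(J, y, y_goal, gain=0.3):
    """One visual-servoing step toward a goal appearance vector y_goal.

    Uses the pseudoinverse of the current Jacobian estimate J; 'gain' is a
    hypothetical scalar step gain.
    """
    error = y_goal - y                       # error in visual (appearance) space
    dq = gain * np.linalg.pinv(J) @ error    # motor correction reducing the error
    return dq
\end{verbatim}

In a full loop, each such step would be interleaved with the model update sketched above, so that servoing starts from a single guessed Jacobian and the model densifies as the movement proceeds.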
Using the visual-motor model in the other direction, the visual outcome of a motor command can be simulated. Intensity images can be generated and shown to the robot operator using an invertible visual front-end, or the simulated appearance vectors can be used as predictors to improve tracking performance. This can be done either on-line, while the robot is executing a task, or off-line. The off-line case corresponds to mentally visualizing a task, while the on-line version is more useful for combating inherent control problems such as time delays.
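As an assumed illustration, local visual simulation amounts to pushing a candidate motor command through the estimated (locally linear) model; chaining such predictions over a command sequence gives the off-line rehearsal of a task. The names below are hypothetical.

\begin{verbatim}
import numpy as np

def predict_appearance(J, y, dq):
    """Simulate the visual outcome of a candidate motor command dq."""
    return y + J @ dq

def simulate_trajectory(J, y0, dq_sequence):
    """Chain local predictions over a motor command sequence (off-line use)."""
    y = np.asarray(y0, dtype=float)
    trajectory = [y.copy()]
    for dq in dq_sequence:
        y = predict_appearance(J, y, np.asarray(dq, dtype=float))
        trajectory.append(y.copy())
    return trajectory
\end{verbatim}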
Figure 1.2: A hand-eye system has three typically concurrent processes: model learning or estimation, motor control and visual prediction. The visual front end serves to reduce the size of the visual space from the (high) dimensionality of the input image to a smaller, more tractable size (in our experiments from 4 to 400 dimensions). The resulting visual representation in appearance or perception vectors is related to the motor representation using estimated visual-motor models.
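One possible realization of such a front end, given here only as an assumed example, is a linear PCA/eigenspace projection: sample images are used to fit a small basis, new images are projected onto it to obtain appearance vectors, and the same basis approximately inverts back to intensity images for display to the operator.

\begin{verbatim}
import numpy as np

def fit_front_end(images, k=20):
    """Fit a k-dimensional linear (PCA/eigenspace) front end.

    images : (N, H, W) array of intensity images.
    Returns the mean image and the top-k principal directions.
    """
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                  # rows of Vt[:k] span the appearance space

def to_appearance(image, mean, basis):
    """Project one image into the low-dimensional appearance space."""
    return basis @ (image.reshape(-1).astype(float) - mean)

def to_image(y, mean, basis, shape):
    """Approximately invert the front end: generate an intensity image from y."""
    return (mean + basis.T @ y).reshape(shape)
\end{verbatim}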
Model learning, movement and visual simulation are integrated so that they mutually benefit. The whole system is centered on the common representation of the visual-motor model. The model estimation benefits from interacting with the control: the discretization step size in the controller is varied so as to always satisfy a model agreement condition between the estimated model and the real world. In addition, the interaction between the process noise in the controller and the model estimation helps estimate a (reasonable) full-dimensional local model even when the desired task is of lower dimension than the control space.
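The excerpt does not spell out the agreement condition itself; the sketch below assumes one plausible form, comparing the predicted and the measured visual change and shrinking or growing the controller's discretization step accordingly. All thresholds and factors are hypothetical.

\begin{verbatim}
import numpy as np

def adapt_step(step, dq, dy_measured, J, agree_tol=0.2,
               shrink=0.5, grow=1.5, step_min=1e-3, step_max=1.0):
    """Adjust the controller step size from a model-agreement test."""
    dy_pred = J @ dq                                   # model's predicted change
    denom = np.linalg.norm(dy_measured) + 1e-9
    disagreement = np.linalg.norm(dy_pred - dy_measured) / denom
    if disagreement > agree_tol:     # model and world disagree: take smaller steps
        step *= shrink
    else:                            # good local agreement: allow larger steps
        step *= grow
    return float(np.clip(step, step_min, step_max))
\end{verbatim}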
Local visual simulation improves performance by serving as a predictor for control and tracking. Non-local visual simulation can aid movement planning and improve the operator interface by letting the operator try tasks in simulation and watch an image sequence of the simulated manipulation before performing it in the real world.
The vision-based strategy for the low-level functions described above allows us to build a user-friendly tele-assistance man-machine interface. The central representation in these three functions is the vision-based appearance vector. It has a direct relation to image space, so in our high-level operator interface a human can visually ``point out'' desired manipulations in a very intuitive way.
An advantage of the on-line model estimation is that the same system, without any reprogramming, is usable on many platforms, as illustrated in Fig. 1.3. The central model acquisition and application modules remain the same, while the manipulators and low-level control, as well as the camera configuration and visual front-end, are exchangeable. We use exactly the same algorithm for regular 3 and 6 DOF control of the PUMA arms as for the quite different and much more complicated 16 DOF control of the Utah/MIT hand, and for the manipulation of non-rigid materials by two arms in parallel. Similarly, the type of visual measures that are extracted can be changed to suit the task at hand without affecting the control algorithm.
Figure 1.3: Exchangeability of parts in a system with on-line visual-motor model acquisition. A manipulation task can be solved on systems with different manipulators and visual front ends, and in different physical environments, without changing the task specification or the high-level model acquisition and application modules.