High-DOF control problems involving manipulation of non-rigid objects are very hard to solve with traditional model-based robot control paradigms. Modeling the problem can be messy or completely impractical, which makes designing a control system difficult. Our adaptive controller, on the other hand, does not need an exact a priori model. Instead, the robot learns and refines successive linear models of the complex, nonlinear overall visual motor model while performing its task in the real environment.
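The text does not give pseudocode for this estimator; below is a minimal sketch of one common realisation, a Broyden-style secant update of an estimated visual motor Jacobian combined with a damped least-squares control step. The function names, damping parameter lam, and gain are illustrative assumptions, not the specific controller used in this experiment.

import numpy as np

def broyden_update(J, dq, dy, eps=1e-9):
    """Secant (Broyden) update of an estimated visual motor Jacobian J.

    J  : current linear model mapping joint changes dq to visual-feature changes dy
    dq : executed joint-space motion, shape (n,)
    dy : observed change in the tracked visual features, shape (m,)
    """
    dq = dq.reshape(-1, 1)
    dy = dy.reshape(-1, 1)
    denom = float(dq.T @ dq) + eps          # guard against a zero-length motion
    return J + (dy - J @ dq) @ dq.T / denom  # rank-one correction of the linear model

def control_step(J, y, y_goal, lam=0.1, gain=0.5):
    """Damped least-squares joint step toward the visual goal y_goal."""
    e = y_goal - y                           # visual-feature error
    n = J.shape[1]
    dq = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ e)
    return gain * dq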
This experiment uses two 6-DOF PUMA manipulators, one attached to each end of a piece of flexible packing foam. The attachment is rigid, so the arms can exert torques as well as forces on the beam. The objective is to bend or fold the beam into a specified shape. The task is specified by showing the system a sequence of images of the desired manipulation.
The Oxford snake trackers [Curwen and Blake, 1992] that we usually use for contour tracking rely on an affine constraint, and we could not get them to track the foam edges through the non-rigid deformations. Instead, a point representation of the foam outline is tracked using special-purpose trackers on the markers seen in Fig. 5.9.
From a visual motor model point of view this is a hard problem, in part because the overall model is highly nonlinear, and in part because we start from a nearly singular state (a nearly straight beam). It turned out that we needed to show a sequence of at least three images during training in order to generate a convergent (sub)goal waypoint sequence; a sketch of how such a waypoint sequence can be followed is given below.
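The following is a minimal sketch, assuming the training images simply define successive visual (sub)goals and the adaptive controller servoes to each in turn; it reuses the broyden_update and control_step sketches above, and the helper names observe_features, execute, and tol are hypothetical.

def follow_waypoints(J, waypoints, observe_features, execute, tol=2.0, max_iters=200):
    """Servo through a sequence of visual subgoals with an adapting linear model."""
    for y_goal in waypoints:            # each demonstrated image defines one subgoal
        for _ in range(max_iters):
            y = observe_features()      # tracked marker positions on the foam outline
            if np.linalg.norm(y_goal - y) < tol:
                break                   # subgoal reached; move on to the next image
            dq = control_step(J, y, y_goal)
            dy = execute(dq)            # command the arms, measure the feature change
            J = broyden_update(J, dq, dy)
    return J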
Figure 5.9: Carrying out the foam folding task