Realizing the two applications above, or something similar, is a long-term goal that requires a large system-building effort. Along the way there are several smaller research problems to address.
A combined mobile platform and manipulator could be controlled by first driving the platform to the manipulation point, parking it, and then manipulating. A more interesting approach would be to make simultaneous use of the platform and manipulator DOF's. This would also allow one to build minimalist arms, using the mobile platform to supply one rotational and two ground-plane translational DOF's.
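As a rough illustration of such simultaneous use, the sketch below stacks an image Jacobian for the base DOF's next to one for the arm joints and solves for all velocities at once with a damped pseudo-inverse. The names and the particular split into base and arm DOF's are illustrative assumptions, not part of our implemented system.
\begin{verbatim}
# Illustrative sketch: base (x, y, theta) and arm joints treated as one
# redundant system by stacking their image Jacobians.
import numpy as np

def combined_step(J_base, J_arm, image_error, damping=1e-3):
    """One resolved-rate step using base and arm DOF's together.
    J_base: (m, 3) image Jacobian w.r.t. base DOF's (x, y, theta), assumed estimated
    J_arm:  (m, n) image Jacobian w.r.t. the n arm joints, assumed estimated
    image_error: (m,) visual feature error to be reduced
    """
    J = np.hstack([J_base, J_arm])                # composite visual-motor Jacobian
    JtJ = J.T @ J + damping * np.eye(J.shape[1])  # damped least squares
    dq = np.linalg.solve(JtJ, J.T @ image_error)
    return dq[:3], dq[3:]                         # base command, arm joint command
\end{verbatim}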
So far we have estimated only kinematic (static) visual-motor models. It is perfectly possible to include dynamic state variables in the method we use. This type of method would be easier to develop on more modern robots than the PUMAs, with more accessible low-level control and faster, more interesting dynamics. Whether an integrated visual-motor kinematics and dynamics approach is better than the current two-level approach remains to be seen.
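One conceivable way to include such dynamic state, sketched below under the assumption that joint displacements and joint velocities have been logged together with the visual feature changes, is to regress the feature changes on both a static (Jacobian-like) term and a velocity-dependent term. This is only an illustration of the idea, not a worked-out method.
\begin{verbatim}
# Illustrative sketch: fit dy ~ dq @ J.T + qdot @ D.T from logged data.
import numpy as np

def fit_visual_motor_dynamics(dy, dq, qdot):
    """dy: (k, m) feature changes, dq: (k, n) joint displacements,
    qdot: (k, n) joint velocities (the added dynamic state)."""
    X = np.hstack([dq, qdot])                   # static + dynamic regressors
    W, *_ = np.linalg.lstsq(X, dy, rcond=None)  # least-squares fit, W is (2n, m)
    n = dq.shape[1]
    return W[:n].T, W[n:].T                     # kinematic part J, dynamic part D
\end{verbatim}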
Appearance-based rather than feature-based visual measures are appealing because there is no need to identify special points to track. The drawback (as discussed in Chapter 6) is that for the ``eigen-image''-based filter the visual-motor model is very non-linear. The motion-based method has a relatively simple visual-motor model, but full motion is hard to extract in real time, and some kind of compression is needed to make the model estimation tractable. Modern video compression methods are based on a combination of (object) motion and appearance representations. It would be interesting to investigate whether these techniques transfer to our application.
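For concreteness, the sketch below shows the basic eigen-image construction we refer to: training images are projected onto a few principal components, and the resulting coefficients serve as a low-dimensional appearance measurement. The function names are illustrative.
\begin{verbatim}
# Sketch of an eigen-image (PCA) appearance representation.
import numpy as np

def eigen_image_basis(images, k=10):
    """images: (N, H*W) flattened training images.
    Returns the mean image and the top-k eigen-images."""
    mean = images.mean(axis=0)
    _, _, Vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, Vt[:k]

def appearance_coefficients(image, mean, basis):
    """Project one flattened image onto the eigen-image basis."""
    return basis @ (image - mean)
\end{verbatim}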
As mentioned above, an eye-tracker provides essentially 2D image-coordinate information. Extracting the 3D specifications required for manipulation control in the man-machine interface will probably be a challenge. Another challenge is to extract only the relevant fixation points from the raw eye-tracker data. We have found that we can extract fixation points quite reliably, and that these are accurate to a few pixels using an ASL free-head eye-tracker, but a remaining (hard) problem is to segment the eye-tracks and separate spurious from relevant fixations.
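As an example of the kind of low-level processing involved, the sketch below implements one common approach, a dispersion-based fixation detector that groups raw 2D gaze samples into candidate fixations; the thresholds are arbitrary and it is not the procedure we used. Deciding which of the resulting fixations are relevant rather than spurious is the harder, still open, problem.
\begin{verbatim}
# Illustrative dispersion-based fixation detector (not the thesis method).
import numpy as np

def detect_fixations(gaze, min_samples=6, max_dispersion=15.0):
    """gaze: (T, 2) raw image-plane gaze samples (pixels).
    Returns a list of (start, end, centroid) candidate fixations."""
    fixations, start = [], 0
    while start + min_samples <= len(gaze):
        end = start + min_samples
        window = gaze[start:end]
        if (window.max(0) - window.min(0)).sum() <= max_dispersion:
            # grow the window while its dispersion stays small
            while end < len(gaze):
                window = gaze[start:end + 1]
                if (window.max(0) - window.min(0)).sum() > max_dispersion:
                    break
                end += 1
            fixations.append((start, end, gaze[start:end].mean(0)))
            start = end
        else:
            start += 1
    return fixations
\end{verbatim}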
In our visual pointing interface, the operator has to choose the appropriate tool (e.g. ``pick up cup tool'') from a menu before manipulating. In many cases the correct manipulation could be chosen automatically, based on which object has been pointed out, by using appearance-based computer vision methods to recognize the object.
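A minimal sketch of this idea, assuming an appearance-based classifier (e.g. nearest neighbour in eigen-image coefficient space) is already available, reduces tool selection to a lookup from the recognized object class to a manipulation primitive; all names and mappings below are hypothetical.
\begin{verbatim}
# Hypothetical sketch: choose the manipulation tool from the recognized object.
import numpy as np

TOOL_FOR_OBJECT = {                 # illustrative mapping, not our actual menu
    "cup": "pick up cup tool",
    "door handle": "open door tool",
    "light switch": "press switch tool",
}

def choose_tool(coeffs, prototypes):
    """coeffs: appearance coefficients of the pointed-out object.
    prototypes: dict mapping object class -> prototype coefficient vector."""
    label = min(prototypes, key=lambda c: np.linalg.norm(coeffs - prototypes[c]))
    return label, TOOL_FOR_OBJECT.get(label)
\end{verbatim}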
We have used our current method to control up to 12 simultaneous motor DOF's. In the past few years several projects have been started aimed at building robotic models ranging from human torsos to complete humans. These robots will have 50 or more DOF's. Although our methods mathematically generalize to any number of DOF's, in practice some modifications are needed to deal with highly redundant systems. A related issue is how to find some higher-level structure mapping estimated properties to different body parts in these complex systems. (Remember that our method estimates a composite visual-motor model, and we don't know of an easy way of separating it into, say, object, eye, body, and arm models.)
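The difficulty of separating the composite model can be seen from a simple algebraic observation (with illustrative notation): if the composite Jacobian factors through several parts,
\[
J \;=\; J_{\mathrm{eye}}\,J_{\mathrm{body}}\,J_{\mathrm{arm}}
  \;=\; \bigl(J_{\mathrm{eye}}S\bigr)\bigl(S^{-1}J_{\mathrm{body}}T\bigr)\bigl(T^{-1}J_{\mathrm{arm}}\bigr)
  \qquad\text{for any invertible } S,\,T,
\]
so the individual factors cannot be recovered from the estimated composite $J$ alone without additional structural constraints.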