
7 Visual Space Task Representation, Planning and Control

    To date, work in image-space visual control has demonstrated low-level servoing behaviors that achieve a single visual alignment, e.g. [Hosoda and Asada, 1994, Hager et al., 1995]. A principal remaining challenge is how to specify complex tasks in visual space, decompose them into achievable subtasks, plan trajectories in visual space, and select appropriate primitive visual servoing behaviors and visual goals. We suggest a semi-automated way of solving these high-level problems: an image-based programming interface, as shown in Fig. 7.1. The user specifies the changes he wishes to bring about in the world by clicking on the objects and pointing out their desired locations and alignment features. If this is done interactively, we have a very low-bandwidth tele-assistance system [Pook and Ballard, 1994], isolating the user from the difficult low-level control problems. If it is done off-line, we have a user-friendly programming interface.

Figure 7.1: Vision-based programming interface.
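To make the decomposition concrete, the sketch below shows one possible representation of such a visually specified task: each user interaction yields a subtask binding tracked image features to desired image locations and a primitive servoing behavior, and the subtasks are executed in sequence. This is a minimal illustrative sketch, not the interface described in this chapter; the names (Subtask, run_servo_loop, execute_task) and the Python form are assumptions introduced here for exposition.

    # Hypothetical sketch: a visually specified task as a sequence of
    # primitive visual alignments.  User clicks become feature/goal pairs,
    # each bound to a primitive servoing behavior.
    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    ImagePoint = Tuple[float, float]      # (u, v) pixel coordinates

    @dataclass
    class Subtask:
        name: str                          # e.g. "align gripper with handle"
        features: List[ImagePoint]         # tracked features on the object
        goals: List[ImagePoint]            # desired image locations (user clicks)
        primitive: Callable[[List[ImagePoint], List[ImagePoint]], None]

    def run_servo_loop(features: List[ImagePoint], goals: List[ImagePoint]) -> None:
        # Placeholder for a primitive image-based servoing behavior that
        # drives the tracked features toward their goal positions.
        ...

    def execute_task(subtasks: List[Subtask]) -> None:
        # Execute the visually specified task as a sequence of primitive
        # visual alignments, one subtask at a time.
        for st in subtasks:
            st.primitive(st.features, st.goals)

    # Example: a coarse transport followed by a fine alignment, each
    # specified by clicking an object feature and its desired location.
    task = [
        Subtask("transport object", [(120.0, 200.0)], [(320.0, 240.0)], run_servo_loop),
        Subtask("align feature",    [(300.0, 250.0)], [(320.0, 240.0)], run_servo_loop),
    ]
    execute_task(task)

Used interactively, the user supplies the clicks on-line (tele-assistance); used off-line, the same list of subtasks serves as a stored visual-space program.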




