To date, work in image-space visual control has demonstrated low-level servoing behaviors that achieve a single visual alignment, e.g. [Hosoda and Asada, 1994, Hager et al., 1995]. A principal remaining challenge is how to specify complex tasks in visual space, divide them into achievable subtasks, plan trajectories in visual space, and select among the primitive visual servoing behaviors and visual goals. We suggest a semi-automated way of solving these high-level problems by providing an image-based programming interface, as shown in Fig. 7.1. The user specifies the changes he wishes to bring about in the world by clicking on the objects and pointing out their desired locations and alignment features. If this is done interactively, we have a very low-bandwidth tele-assistance system [Pook and Ballard, 1994] that isolates the user from the difficult low-level control problems. When it is done off-line, we have a user-friendly programming interface.
Figure 7.1: Vision based programming interface.
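To make the idea concrete, the clicks gathered through such an interface can be viewed as a small data structure of visual goals that the system decomposes into primitive servoing subtasks. The Python sketch below is purely illustrative and not the system described here; the names (VisualGoal, decompose, point_to_point_servo) and the trivial one-goal-per-subtask decomposition are assumptions made for exposition.

from dataclasses import dataclass
from typing import List, Optional, Tuple

Pixel = Tuple[float, float]  # image-plane coordinate (u, v)


@dataclass
class VisualGoal:
    """One user-specified change: bring a clicked object feature to a
    clicked target location in the image, optionally fixing orientation
    with an alignment feature (all names here are illustrative)."""
    object_feature: Pixel               # feature clicked on the object
    target_location: Pixel              # desired image location
    alignment_feature: Optional[Pixel] = None


def decompose(goals: List[VisualGoal]) -> List[List[VisualGoal]]:
    """Split the overall task into subtasks achievable one at a time.
    Trivially one goal per subtask here; a real planner would order the
    goals and insert intermediate via-points in visual space."""
    return [[g] for g in goals]


def point_to_point_servo(goal: VisualGoal, tol_px: float = 2.0) -> None:
    """Stub for a primitive image-based servoing behavior that would drive
    the tracked object feature toward the target pixel until the image
    error falls below tol_px pixels."""
    # A real implementation would track the features, compute the image
    # error, map it through an image Jacobian to robot velocities, and
    # iterate until convergence.
    pass


if __name__ == "__main__":
    # Off-line use: the user's clicks become a stored "program" of goals.
    program = [VisualGoal(object_feature=(120.0, 200.0),
                          target_location=(320.0, 240.0))]
    for subtask in decompose(program):
        for goal in subtask:
            point_to_point_servo(goal)

In the interactive tele-assistance case the same structure applies, except that each goal is executed as soon as it is clicked rather than stored and run off-line.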