Our main inspirations come from four lines of work in computer vision and robotics: visual servoing, learning by watching, reactive robots, and active vision.
An image-based visual servoing system using differential visual feedback moves robot control from world and joint space to visual space. This avoids conventional modeling by omitting the world coordinate frame altogether; Fig. 1.1 illustrates this pictorially. In addition, since the vision system provides continuous pose information, the introduction of differential visual error feedback allows precise positioning despite slight inaccuracies or simplifying approximations in the model. We contribute to this paradigm a novel, fully (robust) adaptive controller based on non-parametric model estimation.
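To make this concrete, consider a minimal sketch of an image-based servoing law in standard notation (the notation is illustrative and not the controller developed in this thesis). Let $y$ be the measured visual feature vector, $y^*$ the goal features, and $q$ the robot joint coordinates. A first-order visuo-motor model and a proportional feedback law are
\[ \Delta y \approx J(q)\,\Delta q, \qquad \Delta q = -\lambda\,\hat{J}^{+}\,(y - y^{*}), \]
where $J$ is the visuo-motor Jacobian, $\hat{J}^{+}$ the pseudo-inverse of its current estimate, and $\lambda > 0$ a gain. Because the error $y - y^{*}$ is re-measured in image space at every step, the loop can converge despite modest errors in $\hat{J}$, which is what makes the world coordinate frame dispensable. One standard non-parametric way to maintain $\hat{J}$ on-line is a secant (Broyden) update,
\[ \hat{J}_{k+1} = \hat{J}_{k} + \frac{\bigl(\Delta y_{k} - \hat{J}_{k}\,\Delta q_{k}\bigr)\,\Delta q_{k}^{\top}}{\Delta q_{k}^{\top}\,\Delta q_{k}}, \]
given here only as an example of the class of estimators meant by ``non-parametric model estimation.''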
Model-based work in visual teaching of a robot, as pioneered by Kuniyoshi [Kuniyoshi et al., 1992] and Ikeuchi [Ikeuchi and Suehiro, 1992], offers an attractive way of teaching a robot by having it observe a human carrying out a task. We seek similarly natural user interfaces, but in a model-free framework, and without the symbolic, AI-based ``understanding'' part.
In tele-assistance [Pook, 1995] a robot operates partly guided by a human and partly autonomously; tele-assistance thus fills the gap between full autonomy and direct control by a human tele-operator. Rather than directly controlling the motor actions of the robot, the human interacts with it using a deictic, high-level language. Pook and Ballard have implemented such a system using a hand sign language sensed by an exoskeleton [Pook and Ballard, 1994]. In this work we present a user interface in which the robot is instructed through visual pointing.
The work on small mobile robots and the subsumption architecture, pioneered by Brooks [Brooks, 1986, Brooks, 1991], shows that agents can interact with a complex and changing environment without complete models. Instead of going through a preprogrammed and inflexible sequence of actions, each action is triggered by a precondition perception that is sensitive to the environment. We advance this concept by adding closed-loop control to this perceptual space, thus taking the open-loop precondition perception $\rightarrow$ action programs into closed-loop precondition perception $\rightarrow$ action $\rightarrow$ perception programs, where the final perception is a goal perception, which is servoed on by a closed-loop controller.
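As an illustrative sketch of the difference between the two program forms (all names here are hypothetical, not taken from any implementation in this thesis):
\begin{verbatim}
import numpy as np

def open_loop_behavior(perceive, precondition, act):
    # Subsumption style: a perception satisfying the precondition
    # triggers the action once; no feedback during execution.
    if precondition(perceive()):
        act()

def closed_loop_behavior(perceive, precondition, servo_step,
                         goal, tol=1e-3):
    # Our extension: the precondition still triggers the behavior,
    # but the action is a servo loop driven by the error between
    # the current perception and a goal perception.
    if precondition(perceive()):
        error = goal - perceive()
        while np.linalg.norm(error) > tol:
            servo_step(error)          # controller reduces the error
            error = goal - perceive()  # re-measure after each step
\end{verbatim}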
In an animate [Ballard, 1990], active, or behavioral vision system, the acquisition of visual information is not an independent, open-loop process, but instead depends on the active agent's interaction with the world. Also, the information need not represent an abstract, high-level model of the world, but instead is highly specific to the task, and represented in a form that facilitates particular operations. Visual servoing, when supplemented with on-line visual model estimation, fits into the active vision paradigm.
Figure 1.1: The general structure of a traditional visual control system and the ``short cut'' of the visual servoing approach. Adapted from [Corke, 1994].