The representations used in a perceptual action system, in which tasks are specified and controlled through visual goals, are very different from those in a traditional control system centered around the world coordinate reference frame. The central representations in a perceptual action system, as shown in Fig. 12, are perception or feature vectors $y$. From one perspective, that of the feedback controller, these are just vectors of real numbers. From a task perspective, however, the vectors need to capture the parts of the system state we are interested in manipulating, and be well conditioned with respect to the task we are performing. In other words, the perceptual goals have to be good attractors.
One way of looking at the situation is to view robot control as an optimization problem of transforming the initial perceptual state, denoted $y_0$, to the goal state $y^*$. In the perceptual action paradigm, the perception vector space is both the space of specification and the space in which the controller operates. Visual servoing is an attempt to solve the optimization problem by gradient descent in this space.
Closing the feedback loop allows servoing to zero perceptual (visual) error without the accurate calibration needed in a traditional system that servos only in robot joint space. The convergence of the algorithm clearly depends on the quality of the measurements.
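Concretely, one servoing iteration can be written as a least-squares step along the estimated descent direction in perception space. The sketch below is a minimal illustration under our own naming; the gain `lam` and the plain least-squares solve are illustrative choices, not a prescription of the controller detailed in the previous sections.

\begin{verbatim}
import numpy as np

def servo_step(y, y_star, J_hat, lam=0.1):
    """One visual-servoing step in perception space.

    y      : current perception vector
    y_star : goal perception vector
    J_hat  : current estimate of the visual-motor Jacobian dy/du
    lam    : small gain, keeping steps short so the iteration
             behaves like gradient descent on the error |y - y_star|
    """
    err = y_star - y
    # Least-squares solve of J_hat @ du = lam * err.
    du, *_ = np.linalg.lstsq(J_hat, lam * err, rcond=None)
    return du
\end{verbatim}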
Figure 12: The representation levels in a perceptual action system. The main level is the perception vectors. Goals are specified as perception vectors, and during motion they represent all the state information relevant for attaining the goal.
The perception vectors $y$ contain all the necessary information to describe the task. The adaptive visual feedback controller we presented in the previous sections, without initial knowledge about the visual-motor model, provides an effective method of finding a control signal sequence $u_1, \ldots, u_n$ that efficiently takes the system through the states $y_1, \ldots, y_n$, while at the same time learning about the model.
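The learning component can be summarized compactly: after each commanded move, the observed perceptual change is used to correct the estimated model. A Broyden-style rank-one (secant) update is one standard way to do this for a Jacobian estimate; the sketch below is illustrative, and the regularizer `eps` is our addition for numerical safety.

\begin{verbatim}
import numpy as np

def broyden_update(J_hat, du, dy, eps=1e-9):
    """Secant correction of the Jacobian estimate.

    After commanding the move du and observing the perceptual
    change dy, adjust J_hat so that J_hat @ du == dy.
    """
    residual = dy - J_hat @ du
    return J_hat + np.outer(residual, du) / (du @ du + eps)
\end{verbatim}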
In the important case of a perception vector based only on vision, each perception vector $y$ corresponds to a measurement on some image $I$. The idea is that $y = f(I)$ is chosen to be a fingerprint of $I$. The fingerprint should be unique, but only in a very weak sense. It should assign the goal value $y^*$ only to images of world configurations which fulfill our goal, and values close to $y^*$ only to images which are close to the goal.
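Stated slightly more formally (our notation, writing $y = f(I)$ for the measurement function), the weak-uniqueness requirement reads:
\[
f(I) = y^* \iff I \text{ shows a goal configuration}, \qquad
\|f(I) - y^*\| \text{ small} \implies I \text{ is close to a goal image}.
\]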
Thus if we have a sequence of goals we can think of the sequence $y^*_1, \ldots, y^*_m$ as a sequence of goal images, although in reality they are measurements of the images.
This is the key to efficient teaching of the system, since producing a set of images for a task, and letting the vision system automatically extract the measurements $y^*_i$, is considerably easier than the traditional trajectory specification method used in conventional robot control.
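In code, teaching by showing reduces to mapping each demonstrated image through the measurement function. A minimal sketch, where `track_features` is a hypothetical stand-in for the feature tracker:

\begin{verbatim}
def teach(goal_images, track_features):
    """Turn demonstrated goal images into goal perception
    vectors y*_1, ..., y*_m."""
    return [track_features(image) for image in goal_images]
\end{verbatim}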
A large number of different visual measures have been proposed for visual feedback, including line length, projected area, and ratios of areas or line lengths [Weiss, Sand 87], as well as Fourier descriptors [Feddema Mitchell 89]. Harris [Harris 93] defines a more general ``visual similarity'' or ``closeness'' function based on orientation, length, and distance between line segments. Our philosophy has been to use the simplest measures that suffice for the task at hand.
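For concreteness, the simplest of these measures are one- or two-line computations on tracked image points; a sketch (our naming):

\begin{verbatim}
import numpy as np

def line_length(p, q):
    """Projected length of the segment between image points p and q."""
    return np.linalg.norm(np.asarray(q) - np.asarray(p))

def length_ratio(p1, q1, p2, q2):
    """Ratio of two projected line lengths, one of the simple
    measures cited above."""
    return line_length(p1, q1) / line_length(p2, q2)
\end{verbatim}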
We tried various measures for our perception vectors. As it turned out, very little structural sophistication was needed to obtain perceptual measures that permitted very good performance for a range of positional tasks. The image locations of tracked object features in different cameras (without correspondence) provided satisfactory perceptual features for all the experiments in this paper. The dimension $m$ of our image space then depends on the number of features tracked, one feature in one camera contributing two scalar values to the $m$-vector.
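A sketch of this construction (our naming): the $(x, y)$ locations of the tracked features, taken camera by camera, are simply stacked into one vector, so $k$ tracked feature observations give $m = 2k$.

\begin{verbatim}
import numpy as np

def perception_vector(points_per_camera):
    """Stack tracked feature locations from all cameras into one
    m-vector; no cross-camera correspondence is required.

    points_per_camera : list of (k_i, 2) arrays of image (x, y)
    """
    return np.concatenate([np.asarray(pts).ravel()
                           for pts in points_per_camera])
\end{verbatim}

For example, three features tracked in each of two cameras give an $m = 12$ vector.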
The task specifications took one of two forms. Either they specified the
desired image locations of various tracked feature points in various
cameras, or they specified pairs of tracked feature points that were
to be brought into some sort of correspondence with each other
(e.g. bringing a rod end to a hole). This latter specification can be
used in applications involving a moving camera or an autonomously
moving object.
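Both forms reduce to an error in perception space. A sketch of the two error functions, with hypothetical names:

\begin{verbatim}
import numpy as np

def location_error(y, y_star):
    """First form: drive tracked feature locations y to the
    specified image locations y_star."""
    return np.asarray(y_star) - np.asarray(y)

def coincidence_error(y_a, y_b):
    """Second form: bring one tracked point (e.g. a rod end) onto
    another (e.g. a hole); no absolute image locations are needed,
    so the goal moves with the camera or the target object."""
    return np.asarray(y_b) - np.asarray(y_a)
\end{verbatim}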
At a higher level, a perceptual action system can utilize a sequence of goal perceptions (feature vectors if we are using vision), each being a subgoal in a task, thus implementing a cycle of precondition perception $y^*_{i-1} \rightarrow$ action $\rightarrow$ perception $y^*_i$, where each perception $y^*_i$ is a goal perception.
This can be seen as an extension of the precondition perception $\rightarrow$ action pairing used in many subsumption architecture systems [Brooks 91][Brooks 86][Connell 91], where a process watches sensory inputs for a precondition perception, and when a matching perception occurs, an open-loop, canned action is carried out. Our approach extends the subsumption idea so that the action is no longer open loop, but under continuous monitoring in perceptual space, attempting to reach a specific perceptual goal.
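The contrast with canned actions can be made concrete: each subgoal is reached by closed-loop servoing, and the precondition for moving on is that the current perception matches the goal perception within tolerance. A sketch with hypothetical observe/execute callbacks, reusing the step function from the earlier sketch:

\begin{verbatim}
import numpy as np

def run_task(subgoals, observe, execute, step, tol=2.0):
    """Cycle through goal perceptions y*_1, ..., y*_m.

    observe : () -> current perception vector y
    execute : (du) -> command the motion increment
    step    : (y, y_star) -> du, e.g. wrapping servo_step above
    tol     : tolerance (here in pixels) for 'goal reached'
    """
    for y_star in subgoals:
        y = observe()
        while np.linalg.norm(y_star - y) > tol:
            # The action stays under continuous perceptual monitoring.
            execute(step(y, y_star))
            y = observe()
\end{verbatim}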