4 Visual space task representation, planning and control

  
Figure 11: The general structure of a traditional visual control system, and the "short cut" of the visual servoing approach. Adapted from Corke 94.

In the past, visual feedback control has been promoted primarily as a method for bypassing hard calibration problems in robotics in general and in visually guided robotics in particular. The situation is illustrated in Figure 11. The present work retains some of that flavor, and we present results involving the visual control of real devices with more degrees of freedom and more complex kinematics (e.g. highly elastic beams) than previously reported. However, we want to shift the main focus to the suitability of visual (and other sensory) reference frames for task specification, particularly when compared to Cartesian world coordinate frames.

A traditional robot has its program stored as a sequence of points expressed in world (or sometimes joint) coordinates. Using this sequence as a program yields behavior that is completely inflexible to changes in the environment. To be adaptive to different environments, a system needs perceptions of the environment. The traditional bare robot perceives only its joint configuration, which is an exceedingly sparse representation of most aspects of the environment, and one that has little relevance to many important tasks.
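For concreteness, a minimal sketch of such a point-sequence program follows; the waypoint values and the move_to_world interface are hypothetical, not taken from any particular controller.

    # Hypothetical sketch of a traditional robot program: a fixed list of
    # world-coordinate waypoints replayed open loop. A moved part or a
    # deflected tool is invisible to this controller, which senses only
    # its own joint configuration.
    WAYPOINTS = [
        (0.50, 0.20, 0.30),   # approach point (x, y, z in meters)
        (0.50, 0.20, 0.10),   # grasp point
        (0.30, 0.40, 0.25),   # place point
    ]

    def replay(move_to_world):
        for p in WAYPOINTS:
            move_to_world(p)  # no sensing of where the part actually is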

To be relevant, a representation should efficiently encode the goals of a task in terms that can realistically be derived from available sensors. Those goals can be visual in nature: for instance, the appearance of the final product is an overall goal of an assembly task. They can also be formed from other sensory signals: for example, the torque with which a bolt is tightened (and where it is inserted) may be the goal of a subtask of the assembly. The crucial observation is that goals are more naturally coded as patterns in sensory signals (perceptions) than as motor positional configurations.
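As an illustration, such a goal might be written down as target patterns over several sensor channels. This is only a sketch: the field names and tolerances below are assumptions made for the example, not part of the system described here.

    from dataclasses import dataclass
    import numpy as np

    # Hypothetical encoding of a task goal as sensory patterns rather
    # than as a motor (joint) configuration. All names and tolerances
    # here are assumptions for illustration.
    @dataclass
    class SensoryGoal:
        image_features: np.ndarray  # desired appearance: tracked feature positions
        torque_target: float        # e.g. bolt-tightening torque, in N*m
        image_tol: float = 1.0      # pixels
        torque_tol: float = 0.05    # N*m

        def achieved(self, image_obs, torque_obs):
            # The goal is met when the observed sensory pattern matches
            # the desired one to within the stated tolerances.
            return (np.linalg.norm(image_obs - self.image_features) < self.image_tol
                    and abs(torque_obs - self.torque_target) < self.torque_tol)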

To achieve such goals, we need either explicit transformations between the sensor spaces and motor control space, or control strategies that can adapt to provide direct control over very general classes of sensory inputs. The former approach often requires excessive amounts of calibration. We adopt the latter, and extend the differential visual feedback controller developed in the previous sections to carry out general visually specified tasks. We call the resulting system a perceptual action strategy, emphasizing that we control motor actions directly from high-level perceptions.
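A minimal sketch of such a differential visual feedback loop follows. It assumes the visual-motor Jacobian is adapted online with a Broyden-style rank-one secant update, one standard choice for this kind of estimator; the read_features and move_joints interfaces are placeholders, and the details need not match the controller of the previous sections exactly.

    import numpy as np

    def visual_feedback_control(read_features, move_joints, y_goal, J,
                                alpha=0.1, tol=1e-3, max_steps=200):
        """Drive the observed sensory features y toward the goal pattern y_goal.

        y_goal : desired sensory (image-feature) pattern, shape (m,)
        J      : initial estimate of the visual-motor Jacobian dy/dx, shape (m, n)
        alpha  : step-size gain; small steps keep the local linear
                 model dy ~ J dx valid
        """
        y = read_features()
        for _ in range(max_steps):
            err = y_goal - y
            if np.linalg.norm(err) < tol:
                break
            # Least-squares motor correction from the current Jacobian estimate
            dx = alpha * np.linalg.lstsq(J, err, rcond=None)[0]
            move_joints(dx)
            y_new = read_features()
            dy = y_new - y
            # Broyden rank-one update: correct J along the explored direction
            J += np.outer(dy - J @ dx, dx) / (dx @ dx)
            y = y_new
        return y

Note that no camera calibration or kinematic model enters the loop: the Jacobian estimate is built and corrected entirely from observed feature motion, which is what lets the same strategy drive uncalibrated and even flexible mechanisms.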




