4.2 The extended control system for visual manipulation

  
Figure 13: Overall structure of our system, and the time frames each part runs in. Arrows between modules indicate direction of information transfer.

The bare visual feedback controller can be used directly to obtain alignment between features as long as these features are visible. In some cases, however, the initial features may disappear during the course of the manipulation. In the case of object insertion, for example, the object typically moves out of view. In such circumstances, the visual goals directing the operation may not refer directly to the features defining the operation, but to other features whose appearance is functionally related to the defining features. For example, during an insertion operation, the bottom of the object being inserted is no longer visible. However, a gripper is connected (often rigidly) to the inserted object, and hence the configuration of the inserted object can be monitored indirectly by monitoring the configuration of the gripper.

Such transitions in the representation are dictated by the high-level planner. However, it turns out to be useful to define local visual coordinate systems that specify directions such as "down" or "into the hole". These local visual coordinate systems can then be used both to generate the virtual sketches needed for indirect control, and to define canned motor actions: pre-defined, open-loop strategies, expressed in either world, vision, or robot (joint) space, for effecting some immediate goal such as insertion.
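As a rough illustration of this indirect monitoring, the position of an occluded feature can be predicted from a visible, rigidly attached feature and a local visual direction such as "into the hole". The following minimal Python sketch is our own illustrative assumption (function name, tracker interface, and pixel offset included), not part of the system described here:

    import numpy as np

    def virtual_feature(gripper_px, hole_axis_px, offset_px):
        """Predict the image position of an occluded feature (e.g. the bottom
        of an inserted object) from a visible feature rigidly attached to it
        (the gripper) and a local visual axis such as "into the hole".

        gripper_px   : (u, v) image location of the tracked gripper feature
        hole_axis_px : image-plane vector pointing "into the hole"
        offset_px    : known pixel offset from the gripper to the hidden feature
        """
        g = np.asarray(gripper_px, dtype=float)
        a = np.asarray(hole_axis_px, dtype=float)
        a = a / np.linalg.norm(a)          # normalize the local visual direction
        return g + offset_px * a           # "virtual sketch" of the hidden feature

The resulting virtual feature can then be fed to the same visual feedback controller as a directly observed one.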

4.2.1 Visual goal generation

Obtaining the visual goals representing relevant goals in a task is a significant issue. Most of our experiments to date have been performed with the system in a semi-autonomous, or tele-assistance, mode, which means that the controlling visual goals are generated with the assistance of an operator. Currently we point out visual goals either by using a mouse in a picture of the scene, or by showing the system an image of the desired pose and having it extract the features forming the desired perception vector. We have found that supplying real pictures of the desired goal pose yields very accurate positioning, as it ensures that the desired pose is actually physically possible. Here we view the role of the human as that of a vision subroutine, pointing out image locations of the objects requested. In many industrial applications, automatic machine vision systems can be used to perform this task.
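As a hedged sketch of how such a desired perception vector might be assembled, the two routes (operator clicks versus a real goal image) could look as follows in Python; the click interface and feature extractor are hypothetical placeholders, not the system's actual code:

    import numpy as np

    def goal_from_clicks(clicked_points):
        """Stack operator-designated (u, v) image points into a desired
        perception vector, in the order the feature tracker reports them."""
        return np.asarray(clicked_points, dtype=float).reshape(-1)

    def goal_from_image(extract_features, goal_image):
        """Alternatively, extract the same features from a real image of the
        desired pose; a physical image guarantees the goal is reachable."""
        points = extract_features(goal_image)   # list of (u, v) feature locations
        return np.asarray(points, dtype=float).reshape(-1)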

Currently, we hard-code the sequencing of visual goals for each task type. For example, in the puzzle demo discussed later in this paper, the high-level sequence for insertion of a puzzle piece (move to piece, pick it up, move near goal, perform insertion, release piece) is provided a priori. An alternative, still short of full autonomy, would be to learn the high-level description from sensory data taken while a human performs the task, but this requires (sub)task segmentation, which has proved to be a hard problem. Another is to use task-level reasoning to work out how to solve the problem; however, this is still some distance in the future.
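For concreteness, such a hard-coded sequence can be viewed as an ordered list of phases, each pairing a visual goal with the strategy used to achieve it. The names and dispatch function below are illustrative assumptions, not the system's actual interface:

    # Illustrative hard-coded sequence for inserting one puzzle piece.
    PUZZLE_INSERTION = [
        ("move_to_piece",     "visual_feedback"),  # servo the gripper onto the piece
        ("pick_up_piece",     "canned_action"),    # open-loop grasp and lift
        ("move_near_goal",    "visual_feedback"),  # servo the piece near the hole
        ("perform_insertion", "canned_action"),    # open-loop move "into the hole"
        ("release_piece",     "canned_action"),    # open the gripper and retract
    ]

    def run_task(sequence, execute_phase):
        """Execute each phase in order; execute_phase dispatches either to the
        visual feedback controller or to a pre-defined canned motor action."""
        for goal, strategy in sequence:
            execute_phase(goal, strategy)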

Our system can also be used as a high-level, user-friendly telemanipulation system, in which the user specifies and sequences what is to be done by pointing out objects to be manipulated in images of the telemanipulation scene, and tagging them with a label describing what is to be done to each object. The manipulations are then carried out autonomously. This significantly lowers the bandwidth of the human-machine interaction, and decouples the human from the kinematics of the actual manipulator used. The above approach can be compared to a system in which human movements are used directly to drive robot joints. Such a system requires the manipulator to be nearly anthropomorphic, or requires a significant learning effort on the part of the operator to master the manipulator kinematics and dynamics [Pook, Ballard 94].

4.2.2 Trajectory planning

Visual space trajectory planning actually takes place on two levels. The high-level visual space trajectory planner is task dependent, and deals with making an overall plan of how to get from the initial configuration to the goal while avoiding obstacles, staying out of singularities, keeping within the convergence range of the controller, etc. This planner basically supplies a sequence of visual goals representing important transition points in the task. We will describe high-level planners for several tasks in the experiments section.

The visual goals provided by the high-level planner may not, however, be adequate for trajectory generation, especially for long paths. The main problem arises when the Jacobian changes substantially over the course of the path. The low-level visual space trajectory generator described in Section 2 further decomposes the trajectory into segments of a length suitable for the differential visual feedback controller.
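A rough sketch of this decomposition, assuming the visual-space displacement is simply subdivided into steps short enough that the Jacobian can be treated as locally constant (the step-size parameter and interface are our own assumptions, not the actual generator of Section 2):

    import numpy as np

    def decompose_visual_path(y_start, y_goal, max_step):
        """Split a long visual-space displacement into intermediate visual
        goals short enough for the differential visual feedback controller."""
        y0 = np.asarray(y_start, dtype=float)
        y1 = np.asarray(y_goal, dtype=float)
        n = max(1, int(np.ceil(np.linalg.norm(y1 - y0) / max_step)))
        # Intermediate goals: exclude the start, include the final goal.
        return [y0 + (k / n) * (y1 - y0) for k in range(1, n + 1)]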


