
5.2 Visual Space Trajectory Control

 

The trust region method developed above for robot joint space also works in visual feature space $\mathbf{y}$ (since the two spaces are locally linearly related by the visual-motor Jacobian, $\Delta\mathbf{y} \approx J\,\Delta\mathbf{x}$, and the trust region method does not depend on which norm is used in eq. 5.3). This allows us to control image velocities in the same way as we controlled joint velocities above. This is useful since the image velocities are usually more relevant to the task than the joint velocities.
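
A minimal Python sketch of such an image-space step is given below. It assumes an estimated visual-motor Jacobian J, a desired feature displacement dy_desired, and a trust region radius D measured in image space; the function name, the least-squares solve, and the simple step scaling are illustrative assumptions, not the exact formulation of eq. 5.3.

import numpy as np

def image_space_step(J, dy_desired, D):
    """Compute a joint-space step dx realizing a desired image-space
    displacement dy_desired, with the trust-region bound applied to the
    predicted image motion ||J dx|| <= D rather than to ||dx||.

    J          : (m, n) estimated visual-motor Jacobian, dy ~= J dx
    dy_desired : (m,) desired change in the visual features
    D          : trust region radius in image (feature) space
    """
    # Least-squares solution of J dx = dy_desired
    dx, *_ = np.linalg.lstsq(J, dy_desired, rcond=None)

    # Predicted image motion for this step
    dy_pred = J @ dx
    norm = np.linalg.norm(dy_pred)

    # Scale the step back so the predicted image motion stays inside
    # the trust region
    if norm > D:
        dx *= D / norm
    return dx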

More importantly, working in image feature space allows us to do image space trajectory planning. We describe this only briefly here; since it ties in more with the vision part of a visual servoing system than with the robot control, we direct the reader to Chapter 7 or [Jägersand and Nelson, 1995], where we describe a method for high level visual space task specification, planning, and trajectory generation. In visual servoing, the task goals are given in image space. When solving complete tasks, the image information is also used to plan trajectories (e.g. to avoid obstacles). Given an initial visual state $\mathbf{y}_0$ and a goal $\mathbf{y}^*$, a sequence of visual space subgoals, or ``way-points'' $\mathbf{y}_1, \ldots, \mathbf{y}_m$, is generated, subject to the visual space path planning constraints. The density of these subgoals controls the maximum speed and acceleration. In order to get straight line trajectories in visual space, a slight modification of the constraint in eq. 5.3 is used: the step-size bound is imposed in visual space, $\|\Delta\mathbf{y}\| \leq D$.
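
The sketch below shows one way such way-points could be generated by straight-line interpolation in feature space; the function name visual_waypoints and the single spacing parameter step_max are assumptions for illustration, and the actual planner (Chapter 7) also enforces path constraints such as obstacle avoidance.

import numpy as np

def visual_waypoints(y0, y_goal, step_max):
    """Generate intermediate visual-space subgoals ("way-points") on the
    straight line from y0 to y_goal, spaced so that no single step moves
    the features farther than step_max. The spacing (density) thus bounds
    the image-space speed of the resulting trajectory.
    """
    dist = np.linalg.norm(y_goal - y0)
    n_steps = max(1, int(np.ceil(dist / step_max)))
    # Include the goal as the last way-point, exclude the start
    return [y0 + (k / n_steps) * (y_goal - y0) for k in range(1, n_steps + 1)]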

Last but not least, the introduction of these intermediate way-points can help us reach the global minimum of the servoing objective rather than a local minimum. This is possible because each subproblem, servoing to one way-point, may very well be convex on its now much smaller subdomain, while the objective is not convex over the entire domain needed to reach the goal directly. Experimentally we have been able to solve some difficult manipulation problems with this technique, such as control in 12 DOF of a nonrigid link (see Section 5.9 or [Jägersand and Nelson, 1994]). In many of these problems we found that, without the visual space planning, the control would terminate in a local minimum or in a singularity. In numerical analysis, techniques like this one are called embedding methods [Gustafsson, 1991] or homotopy methods [Garcia and Zangwill, 1981], and they are used to improve the convergence range of a variety of numerical methods on difficult problems.
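
The sketch below illustrates the resulting outer loop: servoing to each way-point in turn, so that every subproblem is solved over a small region. It reuses image_space_step from the earlier sketch; the callables measure_features and apply_joint_step are hypothetical placeholders for the vision and robot interfaces, and the Jacobian update used in the actual system is only indicated in a comment.

import numpy as np

def servo_through_waypoints(waypoints, J, measure_features, apply_joint_step,
                            D=2.0, tol=1.0, max_iters=50):
    """Servo to each visual-space way-point in turn. Each subgoal defines a
    small subproblem, which is easier to solve than converging to the final
    goal directly.

    measure_features : callable returning the current visual feature vector
    apply_joint_step : callable that commands a joint-space displacement dx
    """
    for y_target in waypoints:
        for _ in range(max_iters):
            y = measure_features()
            error = y_target - y
            if np.linalg.norm(error) < tol:
                break                      # subgoal reached, move on
            dx = image_space_step(J, error, D)
            apply_joint_step(dx)
            # (In practice J would also be updated here, e.g. by a secant
            #  update, as described elsewhere in this chapter.)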


