Research in uncalibrated visual servoing has been going on for more
than a decade.
In [Feddema, 1994] Feddema asks why visual servoing has not yet been adopted in
industry. One reason is that work has so far concentrated on
low-level servoing, i.e. achieving a single visual alignment, and has not,
in general, addressed the interaction of vision and action
at a higher level (e.g., how to specify tasks in a visual
frame, and how to compose different low-level servoing
behaviors in order to bring about a task-level goal).
In this thesis we have presented three main conceptual and two practical
contributions towards making visual servo control and visual frames
useful for solving complex tasks (Chapter 7):
- Task coding and planning are done in a visual space, rather than in the
robot motor space or the 3D world space.
- When composing tasks, we switch between different types of low-level
servoing, controlling different DOFs, depending on the nature of the
subtask being solved (see the sketch following this list).
- Local viewing geometry models, obtained during the course of a
manipulation, are used to specify parts of the task not
describable directly in terms of visual alignments.
- The product of our research is a vision-based interface for
uncalibrated tele-assistance [Pook and Ballard, 1994] and vision-based robot programming.
- We have demonstrated this system by solving several
complex real-world tasks in unstructured environments.
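To make the composition idea concrete, the following is a minimal sketch, not the implementation described in this thesis, of sequencing low-level servoing primitives where each primitive drives one visual alignment while commanding only a subset of the robot's DOFs. The names `ServoPrimitive`, `image_error`, `control_step`, and `send_velocity` are hypothetical placeholders for illustration only.

```python
# Minimal sketch of task composition from low-level visual servoing behaviors.
# All class and function names here are illustrative assumptions, not the
# actual system described in the thesis.

from dataclasses import dataclass
from typing import Callable, Sequence

import numpy as np


@dataclass
class ServoPrimitive:
    """One low-level servoing behavior: drives a single visual alignment
    to zero while controlling only a subset of the robot's DOFs."""
    name: str
    controlled_dofs: Sequence[int]            # indices of the DOFs this behavior commands
    image_error: Callable[[], np.ndarray]     # current image-space alignment error (pixels)
    control_step: Callable[[np.ndarray], np.ndarray]  # maps image error -> velocities for controlled DOFs
    tolerance: float = 2.0                    # pixels; alignment counts as achieved below this


def run_task(primitives: Sequence[ServoPrimitive],
             send_velocity: Callable[[np.ndarray], None],
             n_dofs: int,
             max_iters: int = 1000) -> bool:
    """Execute a task coded as a sequence of visual alignments: run each
    primitive until its image error is small, then switch to the next."""
    for prim in primitives:
        for _ in range(max_iters):
            err = prim.image_error()
            if np.linalg.norm(err) < prim.tolerance:
                break                         # alignment achieved; move on to the next subtask
            dq = np.zeros(n_dofs)
            dq[list(prim.controlled_dofs)] = prim.control_step(err)
            send_velocity(dq)                 # DOFs outside the subset are left at zero velocity
        else:
            return False                      # this subtask did not converge within max_iters
    return True
```

The point of the sketch is only the structure: the task is specified as a sequence of image-space goals, and switching subtasks changes both the error function and which DOFs are under visual control.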