List of Figures
- The general structure of a traditional visual control system,
and the ``short cut'' of the visual servoing approach. Adapted from
Corke 1994.
- A hand-eye system has three typically concurrent processes:
model learning or estimation, motor control and visual prediction.
The visual front end serves to reduce the size of the visual space
from the dimensionality of the input image (on the order of the number
of pixels) to a smaller, more tractable size (in our experiments
from 4 to 400 dimensions). The resulting visual representation
in appearance or perception vectors is related to
the motor representation using estimated visual-motor models.
- Exchangeability of parts in a system with on-line
visual-motor model acquisition. A manipulation task can be solved on
systems with different manipulators and visual front ends, and in
different physical environments without
changing the task
specification or the high-level model acquisition and application modules.
- A typical visual control setup uses two cameras placed so that
they can observe the workspace from two different
viewpoints. Placement
is arbitrary, and the controller has no prior knowledge of
the camera locations, their relation to the robot, or the robot
kinematics. The set-points for the robot joint angles are specified by the
vector $\mathbf{x}$, and visual perceptions, or features, are represented by the
vector $\mathbf{y}$. These are related by an initially unknown transfer
function $\mathbf{y} = f(\mathbf{x})$.
- Planning intermediate way points in visual space reduces
trajectory error compared to pure servoing on the goal.
- Visual control setup using two cameras.
- Visual feedback system with inner-loop joint controllers.
- Time frames in a visual cycle.
- Structure of the adaptive differential visual feedback
control algorithm, and the time
frames in which each part runs. Arrows between modules indicate the
direction of information transfer.
- Experimental setup for controller experiments. Above: Initial
configuration of the robot as seen in the left and right cameras. A
visual goal near the floor is given to the algorithm. Below: Goal
configuration achieved by the controller. The image space trajectory
of the tracked features is overlaid.
- Distribution of repeatability positioning errors. Left:
Closed-loop visual feedback.
Right: Joint feedback only.
- Results from 50 runs of an experiment with a disturbed internal
model in the controller. Convergence for an adaptive and
a non-adaptive 3 DOF controller (left). Average final endpoint
error per feature in feature space for
16 tracked features and a 6 controlled DOF problem (right).
- Left: Visually planned path. Right: Typical oscillatory
path of a non-adaptive controller when model accuracy is near the
convergence limit.
- Left: 250 runs, varying both the error in the visual goal
specification and the number of visual features used, show that,
particularly for imprecise visual specifications, having redundant
features helps. Right: A similar experiment, with errors from imprecise
tracking caused by placing the cameras very far away, shows a similar result.
- Carrying out the foam folding task.
- Examples of manipulation of a grasped object using a Utah/MIT hand.
- Setup for testing the repeatability of fine manipulation using
the Utah/MIT hand.
- Distribution of translational positioning errors for joint
and visual feedback positioning of the Utah/MIT hand.
- Overview of appearance-based view synthesis.
- Synthesized views of joint 1 rotation based on a disparity image
measurement. Left: Small extrapolation, well within the extrapolation
range. Middle: Borderline (6 degree) extrapolation, where errors
become evident. Right: Excessive extrapolation, where the disparity
algorithm breaks down.
- Synthesis based on an eigen-image representation.
- Reproduction quality with different visual-motor model
estimates. Left: Linear through cubic terms.
Middle: Only linear. Right: Mostly quadratic and cubic terms.
- Six most significant filters (eigen-images) learned for
the human arm movement.
- Using the linear model to synthesize a few small deviations
(``twiddles'') from the real physical state in the bottom center image.
- Instantaneous robot parameters simulated for a move of a PUMA 760.
- 5 out of 30 synthesized views generated from the execution of
the same command that generated the previous figure.
- Image quality resulting from using between 9 and 299
eigen-images to represent a single robot pose in the training set.
- Simulation of an articulated PUMA robot, here controlled in 3 DOF
world space within a meter-scale cubic workspace.
- Vision-based programming interface.
- The representation levels in a vision-based control system.
- Overall structure of our system, and the time
frames for running each part.
- Left: Planning the different phases of an insert-type
movement, consisting of reaching and fine manipulation
movements. Right: Performing the planned insertion. (Video 1).
- The physical setup in the puzzle demonstration. A Puma robot
equipped with a magnet on a stick is used to pick up, move and insert
differently shaped puzzle pieces into a box with correspondingly shaped
holes. The robot actions are viewed by two uncalibrated cameras,
giving visual input to a visual space planner and a visual feedback
controller.
- Different modes of control used in the puzzle-solving
manipulations. These images are taken from the controller's right
camera. The white lines show the
controller's visual trajectory plans for carrying out the manipulation.
(Video 2 and Video 3).
- Top left: Initial position showing the eye-in-hand, two-camera
configuration. Following 4 images: Moving in over the light bulb and
preshaping the hand for the grasp; picking up the bulb and transporting
it to the socket; screwing in the bulb using the Utah/MIT hand.
Last 4 images: The corresponding sequence
from the controller's eye-in-hand camera. (Video 6).