Visual Simulation of Articulated Agents
Synthesized image sequences are generated in a new, untraditional way: no prior models or explicit 3D information are used. Instead, an appearance-based visual-motor model of the robot is learned while the real PUMA robot is running. In simulation mode, robot motor commands in joint space are transformed into image sequences using the previously learned visual-motor model, which can render arbitrary robot poses and trajectories completely different from those executed during the learning phase.
A paper describing our method in detail is available in either html or postscript:
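The paper gives the method in full; as a rough, hypothetical illustration of the general idea, the Python sketch below learns an appearance-based visual-motor model by compressing training images into a PCA eigenspace (eigenimages) and interpolating the eigenspace coefficients as a function of joint angles. The class and method names (VisualMotorModel, fit, synthesize) and the choice of RBF interpolation are assumptions made for illustration, not the paper's exact algorithm.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

class VisualMotorModel:
    """Illustrative appearance-based visual-motor model (an assumption,
    not the paper's algorithm): PCA eigenimages plus interpolation of
    the appearance coefficients over joint space."""

    def __init__(self, n_components=20):
        self.n_components = n_components

    def fit(self, joint_angles, images):
        """joint_angles: (n, d) array; images: (n, h, w) array,
        recorded while the real robot is running."""
        n = images.shape[0]
        self.image_shape = images.shape[1:]
        X = images.reshape(n, -1).astype(np.float64)
        self.mean = X.mean(axis=0)
        Xc = X - self.mean
        # PCA via SVD: rows of Vt are the eigenimages.
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        self.basis = Vt[: self.n_components]        # (k, h*w)
        coeffs = Xc @ self.basis.T                  # (n, k)
        # Smooth interpolation of appearance coefficients in joint space.
        self.interp = RBFInterpolator(joint_angles, coeffs, smoothing=1e-6)
        return self

    def synthesize(self, joint_angles):
        """Render images for arbitrary joint commands: (m, d) -> (m, h, w)."""
        coeffs = self.interp(np.atleast_2d(joint_angles))
        frames = coeffs @ self.basis + self.mean
        return frames.reshape((-1,) + self.image_shape)
```

Once fitted, a joint-space trajectory that was never executed during training can be rendered frame by frame with synthesize.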
Videos of agent simulations from the demo page:
- Human arm: Off-line visual simulation of a human arm, based on a model learned during an unrelated training sequence. (237 k)
- On-line simulation: On-line simulation can fill in missing images in a video stream or, as here, generate images simulating small hypothetical moves ("twiddles") from the current physical state. A first-order sketch of this idea follows the list. (55 k)
- 3D robot arm simulation: Off-line simulation allows generation of movies of large movements, based on a better, but also more computationally intensive, learned visual-motor model. (111 k)
- Utah/MIT hand manipulations: A simulation of fingertip manipulations. The object is a wooden block, 2x4x3 inches. Manipulations are simulated in approximate (partially calibrated) 3D Cartesian coordinates. (120 k)
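The on-line "twiddle" generation above can be read as a first-order prediction: if a visual-motor Jacobian J relating small joint moves to image changes is maintained on-line, a hypothetical small move dq from the current state can be previewed as I(q + dq) ≈ I(q) + J dq, without moving the physical robot. The sketch below is an assumption about such a mechanism (a Broyden-style secant update is one common way to estimate J in uncalibrated visual-motor work), with illustrative names throughout; it is not necessarily the algorithm used in the video.

```python
import numpy as np

def broyden_update(J, dq, dI):
    """Secant (Broyden) update of a visual-motor Jacobian J (p x d),
    given an observed joint move dq (d,) and image change dI (p,).
    Illustrative assumption, not the paper's exact estimator."""
    denom = dq @ dq
    if denom < 1e-12:          # ignore negligible moves
        return J
    return J + np.outer(dI - J @ dq, dq) / denom

def twiddle_preview(I_current, J, dq):
    """First-order preview of a small hypothetical move ("twiddle"):
    I(q + dq) ~= I(q) + J dq, using only the current image and J."""
    return I_current + J @ dq
```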
The view synthesis and simulation work is related to our work in visual robot control. A brief description of this can be found in:
[Other research:
Visual Robot Control,
Saliency and Attention
]