front of a camera.
For a human we do not have access to the
(neural) motor control signals, so the movement was parameterized
with respect to the camera-space position of a black marker on the
hand. This allows us to parameterize human arm movement in the
image plane during training, and later to synthesize (animate)
arbitrary arm movements in the image plane.
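As an illustration, an image-plane marker position of this kind can be extracted per frame by simple dark-blob detection. The following sketch uses OpenCV; the intensity threshold and the blob-selection heuristic are illustrative assumptions, not a description of the tracker actually used.

\begin{verbatim}
import cv2

def marker_position(frame, thresh=40):
    """Estimate the image-plane position of a dark marker.

    Returns the centroid (x, y) of the largest dark blob in the
    frame, which serves as the parameterization of the arm pose.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dark marker -> low intensity; threshold and invert.
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
\end{verbatim}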
Figure 6.4 shows image frames from the movie sequence Video 1.
In the movie, notice that our simulation
technique captures not only the two different motions of the
upper arm and forearm, but also regenerates the correlated non-rigid
deformation (wrinkling) of the shirt. However, the small head
movements were not correlated with the marker position, and
thus show up as random motion (noise) in the synthesis stage.
To study the model approximation quality we varied the weight vector
$w = (w_1, w_2, w_3)$ in (6.2), whose components weight the linear,
quadratic and cubic terms, respectively. As a baseline we use the
estimation with all three weights active. Including only the linear
term ($w_2 = w_3 = 0$) yields a smoother-varying approximation
and makes the displayed images slightly more blurry, but the desired
arm positions are reproduced accurately. Figure 6.5
illustrates this by showing the synthesis of the same arm pose
with three different weight vectors $w$. Artificial ``sharpening'' of the image
can be achieved by suppressing the linear term, but at the expense
of position accuracy. In the right frame of Fig. 6.5
notice how the fingers of the hand are geometrically distorted.
Also, when comparing the three movies Video 1, Video 2 and Video 3,
corresponding to the three different weight vectors in Fig. 6.5,
one can see that while the baseline and linear-only estimates reproduce
straight-line trajectories for the hand, the estimate with the linear
term suppressed shows a small but noticeable motion jitter from
inaccurate positioning.
Figure 6.5: Reproduction quality with different visual-motor model
estimates. Left: linear through cubic terms;
middle: only linear; right: mostly quadratic and cubic terms.
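To make the role of the weight vector concrete, the sketch below shows one way such a polynomial visual-motor model can be estimated and used for synthesis. It is a minimal illustration rather than the exact formulation of (6.2): the subspace dimension $k$, the monomial feature set, and the block-wise scaling of the features by $w$ at synthesis time are assumptions made for brevity.

\begin{verbatim}
import numpy as np

def poly_features(y):
    """Monomials of the 2-D marker position up to cubic order,
    ordered as a constant, then linear, quadratic, cubic blocks."""
    x1, x2 = y
    lin  = [x1, x2]
    quad = [x1 * x1, x1 * x2, x2 * x2]
    cub  = [x1**3, x1**2 * x2, x1 * x2**2, x2**3]
    return np.array([1.0] + lin + quad + cub)

def fit_model(images, positions, k=6):
    """Fit the model from n training frames.

    images: (n, p) flattened frames; positions: (n, 2) marker
    positions. Returns the mean image, the eigen-image basis
    U (p, k), and the regression A mapping polynomial features
    to subspace coordinates.
    """
    mean = images.mean(axis=0)
    # Eigen-images U from a thin SVD of the mean-subtracted data.
    _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
    U = vt[:k].T
    coords = (images - mean) @ U          # (n, k) coordinates
    Phi = np.stack([poly_features(y) for y in positions])
    A, *_ = np.linalg.lstsq(Phi, coords, rcond=None)
    return mean, U, A

def synthesize(y, mean, U, A, w=(1.0, 1.0, 1.0)):
    """Render the frame for marker position y; the weight vector
    w scales the linear, quadratic and cubic feature blocks."""
    scale = np.array([1.0] + [w[0]] * 2 + [w[1]] * 3 + [w[2]] * 4)
    return mean + U @ (A.T @ (scale * poly_features(y)))
\end{verbatim}

In this sketch, $w = (1, 1, 1)$ plays the role of the baseline estimate, $w = (1, 0, 0)$ the linear-only estimate, and a setting such as $w = (0, 1, 1)$ the mostly quadratic and cubic estimate of Fig. 6.5.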
In previously published research on linear subspace methods for recognition [Turk and Pentland, 1991], the eigen-images $U$ in eq. 4.3 look something like the objects they represent (hence the terms ``eigen objects'' and ``eigen faces'' in face recognition). In our application we apply the subspace methods in a different way: instead of representing different objects or faces in the same location, we represent the same object (a human or robot arm) during different motions. The resulting eigenvectors do not look like human or robot arms, but like motion filters, with maximum gradient along the most common normal (edge) motion direction. Figure 6.6 shows the six most significant eigenvectors.
Figure 6.6: Six most significant filters (eigen-images) learned for
the human arm movement.
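Filters of the kind shown in Fig. 6.6 can be inspected by reshaping the columns of a learned basis back to frame dimensions. A small sketch, assuming a basis U ordered by decreasing singular value (as computed in the previous sketch) and using matplotlib for display:

\begin{verbatim}
import matplotlib.pyplot as plt

def show_eigen_images(U, shape, n=6):
    """Display the n most significant eigen-images as 2-D filters.

    U: (p, k) basis with columns ordered by decreasing singular
    value; shape = (height, width) of the original frames.
    """
    fig, axes = plt.subplots(1, n, figsize=(2 * n, 2))
    for i, ax in enumerate(axes):
        ax.imshow(U[:, i].reshape(shape), cmap="gray")
        ax.set_title(f"u{i + 1}")
        ax.axis("off")
    plt.show()
\end{verbatim}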