
6.5 Evaluating the Estimated Visual-Motor Model of a Human Arm

 


Figure 6.4: Simulation of a human arm. Simulated movie: Video 1.

 

The visual-motor model of a human arm was estimated from a movie of a person moving their arm in front of a camera. For a human we do not have access to the (neural) motor control signals, so the movement was instead parameterized by the image-plane position of a black marker on the hand. This allows us to parameterize human arm movement in the image plane during training, and later to synthesize (animate) arbitrary arm movements in the image plane. Figure 6.4 shows image frames from the movie sequence Video 1. Notice in the movie that our simulation technique captures not only the two different motions of the upper arm and forearm, but also regenerates the correlated non-rigid deformation (wrinkling) of the shirt. The small head movements, however, were not correlated with the marker position, and thus show up as random motion (noise) in the synthesis stage.
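As a concrete illustration of this pipeline, the following is a minimal sketch (an assumption-laden reconstruction, not the thesis implementation): the training frames are projected onto a subspace of eigen-images, and a polynomial map from the marker's image-plane position to the subspace coefficients is fit by least squares. The basis size k = 6, the cubic degree, and all array shapes are illustrative assumptions.

import numpy as np

def poly_features(x, degree=3):
    """Monomials of the 2-D marker position x = (u, v) up to `degree`."""
    u, v = x
    feats = [1.0]
    for d in range(1, degree + 1):
        feats += [u**i * v**(d - i) for i in range(d + 1)]
    return np.array(feats)

def train(images, markers, k=6, degree=3):
    """images: (n, npix) stacked frames; markers: (n, 2) marker positions."""
    mean = images.mean(axis=0)
    # Eigen-images: top-k left singular vectors of the centered image stack.
    U, _, _ = np.linalg.svd((images - mean).T, full_matrices=False)
    U = U[:, :k]
    coeffs = (images - mean) @ U                     # (n, k) subspace coords
    F = np.stack([poly_features(m, degree) for m in markers])
    W, *_ = np.linalg.lstsq(F, coeffs, rcond=None)   # polynomial regression
    return mean, U, W

def synthesize(x, mean, U, W, degree=3):
    """Render an arm image for an arbitrary marker position x."""
    return mean + U @ (poly_features(x, degree) @ W)

Synthesis of a new pose then reduces to evaluating the polynomial at the desired marker position and back-projecting through the eigen-images.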

To study the quality of the model approximation we varied the weight vector $w$ in (6.2). As a baseline we use the estimate with all three term groups enabled (the linear, quadratic and cubic terms in (6.2)). Including only the linear term yields a more smoothly varying approximation and makes the displayed images slightly more blurry, but the desired arm positions are reproduced accurately. Figure 6.5 illustrates this by showing the synthesis of the same arm pose under three different choices of $w$. Artificial ``sharpening'' of the image can be achieved by suppressing the linear term, but at the expense of position accuracy: in the right frame of Fig. 6.5, notice how the fingers of the hand are geometrically distorted. Comparing the three movies Video 1, Video 2 and Video 3, corresponding to the three different $w$ in Fig. 6.5, one can also see that while the full and linear-only models reproduce straight-line trajectories for the hand, the estimate with the linear term suppressed shows a small but noticeable motion jitter from inaccurate positioning.
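As a hedged sketch of the role of $w$, suppose the weights simply gate which monomial degrees enter the design matrix: $w = (1,1,1)$ for the baseline, $(1,0,0)$ for the linear-only model, and $(0,1,1)$ when the linear term is dropped. These values and the 0/1 gating are assumptions made for illustration; the weights in (6.2) may well be continuous rather than on/off switches.

import numpy as np

def gated_features(x, w=(1, 1, 1)):
    """Monomials of the 2-D marker position x = (u, v), gated per degree by w."""
    u, v = x
    groups = {1: [u, v],
              2: [u*u, u*v, v*v],
              3: [u**3, u*u*v, u*v*v, v**3]}
    feats = [1.0]
    for d in (1, 2, 3):
        if w[d - 1]:
            feats += groups[d]
    return np.array(feats)

# Hypothetical usage, one fit per weight setting compared in Fig. 6.5,
# with `markers` and `coeffs` as in the training sketch above:
# for w in [(1, 1, 1), (1, 0, 0), (0, 1, 1)]:
#     F = np.stack([gated_features(m, w) for m in markers])
#     W, *_ = np.linalg.lstsq(F, coeffs, rcond=None)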

Figure 6.5: Reproduction quality with different visual-motor model estimates. Left: linear through cubic terms; middle: only the linear term; right: mostly quadratic and cubic terms.

In previously published research on linear subspace methods for recognition [Turk and Pentland, 1991], the eigen-images $U$ in eq. (4.3) look something like the objects they represent (hence the terms ``eigen-objects'' and ``eigen-faces'' in face recognition). In our application the subspace methods are used differently: rather than representing different objects or faces in the same location, we represent the same object (a human or robot arm) during different motions. The resulting eigenvectors do not look like human or robot arms, but like motion filters, with maximum gradient along the most common normal (edge) motion direction. Figure 6.6 shows the six most significant eigenvectors.
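The ``motion filter'' appearance of the eigenvectors has a simple first-order explanation: for a small translation $t$, $I(x - t) \approx I(x) - t\, I'(x)$, so the centered frames of a moving edge span the edge's spatial derivative. The short numerical check below (an illustration constructed for this text, not thesis data) recovers the derivative of a synthetic 1-D edge as the top eigenvector; the edge width and shift range are arbitrary choices.

import numpy as np

x = np.linspace(-1.0, 1.0, 200)
# A tanh edge of width 0.05, translated by small amounts t.
frames = np.stack([np.tanh((x - t) / 0.05)
                   for t in np.linspace(-0.02, 0.02, 50)])
centered = frames - frames.mean(axis=0)
U, s, _ = np.linalg.svd(centered.T, full_matrices=False)
e1 = U[:, 0]                             # top eigen-image of the sequence

d = np.gradient(frames.mean(axis=0), x)  # spatial derivative of the mean frame
d /= np.linalg.norm(d)
print(abs(e1 @ d))                       # ~1: the eigen-image is a derivative filter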

Figure 6.6: Six most significant filters (eigen-images) learned for the human arm movement.



Martin Jägersand