Computer Science @ Rochester
Tuesday, July 29, 2003
11:00 AM
CSB 209
Rahul Bhotika
University of Rochester
Scene-Space Methods for Bayesian Inference of 3D Shape and Motion
We combine scene-space methods with Bayesian modeling to recover the geometric (3D shape, appearance) and dynamic (motion, deformation) properties of real-world scenes from noisy images. Bayesian modeling in scene space establishes a direct mathematical relationship between the uncertainty in estimating scene properties (e.g., 3D shape and motion) and the uncertainty due to noise and errors in image measurements. This leads to algorithms that robustly and optimally recover 3D scene properties directly from image intensities. We apply this approach to two specific problems.
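
As a toy illustration of this scene-space Bayesian view (not the talk's actual formulation), the sketch below applies Bayes' rule to a single 3D point: the posterior probability that the point is occupied is computed directly from its noisy projected image intensities under an assumed Gaussian noise model. All function and parameter names here are hypothetical.

```python
# Minimal sketch (assumptions, not the authors' formulation): Bayes' rule
# links a binary scene property -- occupancy of one 3D point -- to noisy
# image intensities under an assumed Gaussian pixel-noise model.
import numpy as np

def occupancy_posterior(intensities, mu_occupied, mu_empty, sigma, prior=0.5):
    """Posterior P(occupied | intensities) for a single 3D point.

    intensities : observed pixel values at the point's image projections
    mu_occupied : expected intensity if the point lies on a Lambertian surface
    mu_empty    : expected intensity if the point is empty (background)
    sigma       : assumed standard deviation of the image noise
    """
    x = np.asarray(intensities, dtype=float)
    # Gaussian log-likelihoods of the measurements under each hypothesis.
    log_l_occ = -0.5 * np.sum((x - mu_occupied) ** 2) / sigma**2
    log_l_emp = -0.5 * np.sum((x - mu_empty) ** 2) / sigma**2
    # Bayes' rule in log-odds form for numerical stability.
    log_odds = np.log(prior / (1 - prior)) + log_l_occ - log_l_emp
    return 1.0 / (1.0 + np.exp(-log_odds))

# Consistent measurements near mu_occupied give a posterior close to 1.
print(occupancy_posterior([0.82, 0.79, 0.85],
                          mu_occupied=0.8, mu_empty=0.2, sigma=0.1))
```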

The first problem we examine is inferring 3D shape from a set of noisy images. We derive a general probabilistic theory of occupancy and emptiness that specifies what one can infer about 3D shape for arbitrarily shaped, Lambertian scenes and arbitrary viewpoint configurations. By modeling the problem in scene space, we formalize the notions of visibility, occupancy, emptiness, and photo-consistency, leading to the Photo Hull Distribution, the tightest probabilistic bound on the scene's true shape that is theoretically computable from the input images. We show how to express this distribution directly in terms of image measurements and represent it compactly by assigning an occupancy probability to every 3D point. We provide a stochastic algorithm that draws fair samples from the Photo Hull Distribution and converges to an optimal conservative estimate of the occupancy probability, and we present experimental results for real, complex scenes.
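
The following is a minimal sketch of the Monte Carlo idea behind such a sampler, under strong simplifying assumptions: each voxel's keep-or-carve decision is randomized independently according to a given photo-consistency probability, and the occupancy probability is estimated as the fraction of sampled shapes containing the voxel. The actual algorithm must also handle the visibility coupling between voxels, which this toy version deliberately ignores; all names are illustrative.

```python
# Illustrative sketch only: Monte Carlo estimate of per-voxel occupancy
# probability by drawing random shape samples. Real photo-hull sampling
# also accounts for visibility dependencies between voxels, omitted here.
import numpy as np

rng = np.random.default_rng(0)

def sample_occupancy(consistency_prob, n_samples=1000):
    """consistency_prob: per-voxel photo-consistency probabilities, i.e.,
    the probability that a voxel's projected intensities arise from a
    single surface point under the assumed image-noise model.

    Returns the fraction of sampled shapes retaining each voxel -- a
    Monte Carlo estimate of its occupancy probability.
    """
    p = np.asarray(consistency_prob, dtype=float)
    counts = np.zeros_like(p)
    for _ in range(n_samples):
        # Randomize each keep/carve decision according to its probability,
        # yielding one fair sample of a plausible shape.
        keep = rng.random(p.shape) < p
        counts += keep
    return counts / n_samples

probs = np.array([0.95, 0.5, 0.1])
# With independent decisions the estimate simply approaches the inputs;
# visibility coupling is what makes the real sampler nontrivial.
print(sample_occupancy(probs))
```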

The second problem we study is recovering the nonrigid motion of deformable surfaces from noisy video. We develop linear methods for model-based tracking of nonrigid 3D objects in video. Uncertainty in image measurements is quantified and propagated through the inverse model to yield optimal 3D pose and deformation estimates directly from 2D image intensities. We obtain accurate and robust closed-form nonrigid motion estimators by minimizing the information lost in irreversible operations. We demonstrate results on 3D nonrigid tracking, model refinement, and super-resolution texture lifting from low-quality, low-resolution video.
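
As a hedged sketch of how a closed-form estimator with uncertainty propagation might look (not the talk's exact method), the snippet below stacks linearized brightness constraints into a weighted least-squares system and propagates per-pixel noise variance into a covariance on the pose/deformation parameters. The Jacobian, noise model, and all names are assumptions made for illustration.

```python
# Hedged sketch: solve stacked linearized brightness constraints
# J @ theta ~= dI by weighted least squares, propagating per-pixel noise
# variance into a covariance for the 3D pose/deformation parameters.
import numpy as np

def linear_motion_estimate(J, dI, sigma):
    """J     : (n_pixels, n_params) Jacobian of intensity w.r.t. parameters
    dI    : (n_pixels,) observed temporal intensity differences
    sigma : (n_pixels,) assumed per-pixel noise standard deviations

    Returns (theta_hat, covariance): the closed-form weighted
    least-squares estimate and its propagated uncertainty.
    """
    W = 1.0 / np.asarray(sigma, dtype=float) ** 2   # inverse-variance weights
    A = J.T @ (W[:, None] * J)                      # weighted normal equations
    b = J.T @ (W * dI)
    cov = np.linalg.inv(A)                          # parameter covariance
    theta_hat = cov @ b
    return theta_hat, cov

# Tiny synthetic example: 2 parameters, 5 pixels.
rng = np.random.default_rng(1)
J = rng.standard_normal((5, 2))
theta_true = np.array([0.3, -0.1])
sigma = np.full(5, 0.05)
dI = J @ theta_true + rng.standard_normal(5) * sigma
theta_hat, cov = linear_motion_estimate(J, dI, sigma)
print(theta_hat, np.sqrt(np.diag(cov)))  # estimate and 1-sigma uncertainties
```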