Research

I have been doing research on Augmented Reality (AR). I took part in building an AR system targeted at image-guided surgery applications, a joint effort of URCS and Siemens Corporate Research. This dual-SGI-PC-based system operates in real time with registration error in the sub-millimeter range, and it is a good example of a typical real-life AR system.

I am now working on improving AR rendering. Most current AR systems, including ours, concentrate on the registration problem; when it comes to rendering, they simply overlay virtual objects onto the real scene. To enhance the sense of immersion, however, some basic physical interactions between the real and virtual objects have to be simulated. For example, the real and virtual objects should occlude each other properly, and they may cast shadows on each other. More difficult interactions include inter-reflection between the real and the virtual. Substantial knowledge of the real scene is needed to simulate all of these lighting effects: we acquire the environment lighting with an active camera that captures the whole environment, and the 3D geometry of the scene with 3D reconstruction methods that employ multiple cameras.

The input to the system is a set of pictures of the to-be-augmented scene and knowledge of the cameras, together with graphical descriptions of the virtual objects. The output is the same set of pictures with the virtual objects inserted with a convincing appearance. The virtual objects are rendered with a standard CG renderer using the knowledge of the scene and the cameras; we use Radiance for its lighting-simulation capabilities. The schematic is shown below.

[Figure: schematic of the system]
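
As a concrete illustration of one such interaction, the sketch below composites a virtual rendering into a real photograph with a per-pixel depth test, so that real and virtual objects occlude each other correctly. It is a minimal sketch rather than our actual pipeline; the array names, and the assumption that depth maps are available for both layers, are hypothetical.

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth, virt_alpha):
    """Per-pixel depth test: a virtual pixel is drawn only where it is
    closer to the camera than the real surface, so real objects can
    occlude virtual ones and vice versa.

    real_rgb   : (H, W, 3) photograph of the real scene
    real_depth : (H, W)    reconstructed depth of the real scene
    virt_rgb   : (H, W, 3) virtual objects rendered from the same camera
    virt_depth : (H, W)    depth buffer of the virtual rendering
    virt_alpha : (H, W)    coverage mask of the virtual rendering (0 or 1)
    """
    # The virtual pixel wins only where it exists and is nearer than the real scene.
    visible = (virt_alpha > 0) & (virt_depth < real_depth)
    out = real_rgb.copy()
    out[visible] = virt_rgb[visible]
    return out
```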

The two major components of the system are lighting acquisition and 3D object modeling. To make things simpler and more controllable, we limit ourselves to indoor workspace settings; there, the scene knowledge can be obtained more easily than in general environments, and the results can be evaluated more easily. We capture pictures of the whole room with a pan-tilt camera; by stitching these pictures together and combining them with an a priori model of the room, we create a lighting model for the renderer. We have also developed a method that exploits the regularities present in indoor scenes to reconstruct and extract the light sources explicitly. For object modeling, we place the camera at several positions around the workspace and take pictures, from which we recover the sizes, shapes, and locations of the real objects of interest. Currently, we are working on a shape-from-silhouette algorithm and toward system integration; sketches of both steps follow.
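
Our regularity-based method is more involved, but a generic simplification conveys the idea of explicit light extraction: threshold the brightest pixels of the stitched environment image, group them into blobs, and convert each blob centroid into a light direction. The equirectangular (lat-long) layout, the threshold value, and all names below are illustrative assumptions, not our actual algorithm.

```python
import numpy as np
from scipy import ndimage

def extract_light_directions(env_map, threshold=0.9):
    """Find bright blobs in an equirectangular environment image and
    return a (unit direction, total intensity) pair for each blob."""
    h, w = env_map.shape[:2]
    luminance = env_map.mean(axis=2) if env_map.ndim == 3 else env_map
    bright = luminance > threshold * luminance.max()
    labels, n = ndimage.label(bright)       # connected bright regions
    lights = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        # Pixel centroid -> spherical angles -> unit direction vector.
        theta = (ys.mean() / h) * np.pi          # polar angle
        phi = (xs.mean() / w) * 2.0 * np.pi      # azimuth
        direction = np.array([np.sin(theta) * np.cos(phi),
                              np.sin(theta) * np.sin(phi),
                              np.cos(theta)])
        lights.append((direction, luminance[ys, xs].sum()))
    return lights
```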
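For the shape-from-silhouette step, a minimal visual-hull sketch follows: a voxel survives only if it projects inside the object's silhouette in every calibrated view. The grid resolution, the 3x4 projection matrices, and the function names are assumptions for illustration, not our implementation.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_min, grid_max, res=64):
    """Keep a voxel only if it projects inside the silhouette in all views.

    silhouettes : list of (H, W) binary masks, one per view
    projections : list of 3x4 camera projection matrices, one per view
    grid_min/max: corners of the bounding box to carve, e.g. (0, 0, 0)
    """
    axes = [np.linspace(grid_min[i], grid_max[i], res) for i in range(3)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    # Homogeneous coordinates of every voxel centre, shape (4, res^3).
    pts = np.stack([xs.ravel(), ys.ravel(), zs.ravel(), np.ones(res ** 3)])
    occupied = np.ones(res ** 3, dtype=bool)
    for sil, P in zip(silhouettes, projections):
        uvw = P @ pts                        # project voxels into this view
        u = (uvw[0] / uvw[2]).round().astype(int)
        v = (uvw[1] / uvw[2]).round().astype(int)
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        # Voxels projecting outside the image or outside the silhouette
        # cannot belong to the object.
        hit = np.zeros_like(occupied)
        hit[inside] = sil[v[inside], u[inside]] > 0
        occupied &= hit
    return occupied.reshape(res, res, res)
```

The carved occupancy grid gives the sizes, shapes, and locations of the real objects, which the renderer can then use for occlusion and shadow computations.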