Object Discovery Homepage

[Figure: object discovery]
Object Discovery (OD):
Given a set of observations, group together all observations that stem from a particular object, without including observations that originate from other objects.
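
Stated concretely, the task asks for a partition of the observation set by originating object. The sketch below only pins down that input/output structure; the Observation type, Grouping alias, and is_valid_grouping function are hypothetical names for illustration and do not appear in the papers.

```python
from dataclasses import dataclass
from typing import Dict, List, Set, Tuple

# Hypothetical types used only to state the task.
@dataclass(frozen=True)
class Observation:
    camera: int              # which (uncalibrated) camera produced the observation
    frame: int               # time index of the frame
    pixel: Tuple[int, int]   # (row, col) location within that frame

ObjectLabel = int
Grouping = Dict[ObjectLabel, Set[Observation]]

def is_valid_grouping(observations: List[Observation], grouping: Grouping) -> bool:
    """A grouping solves OD only if every observation lands in exactly one object's group."""
    assigned = [obs for group in grouping.values() for obs in group]
    return len(assigned) == len(observations) and set(assigned) == set(observations)
```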


Publications

A Theory of the Quasi-static World
Abstract:
We present the theory behind a novel unsupervised method for discovering quasi-static objects, objects that are stationary during some interval of observation, within image sequences acquired by any number of uncalibrated cameras. For each pixel we generate a signature that encodes the pixel's temporal structure. Using the set of temporal signatures gathered across views, we hypothesize a global schedule of events and a small set of objects whose arrivals and departures explain the events. The paper specifies observability conditions under which the global schedule can be established and presents the QSL algorithm that generates the maximally-informative mapping of pixels' observations onto the objects they stem from. Our framework ignores distracting motion, correctly deals with complicated occlusions, and naturally groups observations across cameras. The sets of 2D masks we recover are suitable for unsupervised training and initialization of object recognition and tracking systems.
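
The per-pixel quantity at the heart of the method is the temporal signature. As a rough illustration of the idea only (the change-point test, the threshold, and the temporal_signature name below are placeholders, not the construction used in the paper), a pixel's intensity time series can be split into maximal runs of stable appearance, each run being a candidate set of observations stemming from a single quasi-static object or from the background:

```python
import numpy as np

def temporal_signature(intensities: np.ndarray, threshold: float = 25.0):
    """Split one pixel's intensity time series into maximal stable runs.

    Each returned (start_frame, end_frame) run is a candidate set of
    observations stemming from one quasi-static object or the background;
    a large frame-to-frame change marks a hypothesized arrival or
    departure event at that pixel.
    """
    runs, start = [], 0
    for t in range(1, len(intensities)):
        if abs(float(intensities[t]) - float(intensities[t - 1])) > threshold:
            runs.append((start, t - 1))  # close the current stable run
            start = t                    # an arrival/departure event is hypothesized here
    runs.append((start, len(intensities) - 1))
    return runs
```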
The OD Theory of TOD: The Use and Limits of Temporal Information for Object Discovery
Abstract:
We present the theory behind TOD (the Temporal Object Discoverer), a novel unsupervised system that uses only temporal information to discover objects across image sequences acquired by any number of uncalibrated cameras. The process is divided into three phases: (1) Extraction of each pixel's temporal signature, a partition of the pixel's observations into sets that stem from different objects; (2) Construction of a global schedule that explains the signatures in terms of the lifetimes of a set of quasi-static objects; (3) Mapping of each pixel's observations to objects in the schedule according to the pixel's temporal signature. Our Global Scheduling (GSched) algorithm provably constructs a valid and complete global schedule when certain observability criteria are met. Our Quasi-Static Labeling (QSL) algorithm uses the schedule created by GSched to produce the maximally-informative mapping of each pixel's observations onto the objects they stem from. Using GSched and QSL, TOD ignores distracting motion, correctly deals with complicated occlusions, and naturally groups observations across cameras. The sets of 2D masks recovered are suitable for unsupervised training and initialization of object recognition and tracking systems.
Note:
The URCS technical report contains additional details and proofs not in the AAAI paper.
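
The three-phase structure translates directly into a pipeline. The skeleton below sketches only that data flow; the function names, type aliases, and data shapes are assumptions, and the actual GSched and QSL constructions (and the guarantees attached to them) are given in the paper and the technical report.

```python
from typing import Dict, List, Tuple

PixelId = Tuple[int, int, int]         # (camera, row, col); cameras need not be calibrated
Signature = List[Tuple[int, int]]      # a pixel's observations partitioned into stable runs
Schedule = List[Tuple[int, int, int]]  # (object_id, arrival_event, departure_event)

def extract_signatures(sequences) -> Dict[PixelId, Signature]:
    """Phase 1: partition each pixel's observations by their temporal structure."""
    raise NotImplementedError

def gsched(signatures: Dict[PixelId, Signature]) -> Schedule:
    """Phase 2: hypothesize quasi-static object lifetimes that explain all signatures."""
    raise NotImplementedError  # valid and complete when the observability criteria are met

def qsl(signatures: Dict[PixelId, Signature], schedule: Schedule) -> Dict[PixelId, List[int]]:
    """Phase 3: map each pixel's runs onto the scheduled objects they stem from."""
    raise NotImplementedError

def tod(sequences) -> Dict[PixelId, List[int]]:
    """End-to-end: signatures -> global schedule -> per-pixel object labels."""
    signatures = extract_signatures(sequences)
    return qsl(signatures, gsched(signatures))
```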
Unsupervised Discovery of Objects Using Temporal Coherence
Abstract:
We present a novel unsupervised method for discovering objects in image sequences. Instead of using spatial homogeneity to partition pixels into regions, we group pixels into large temporally coherent clusters (TCCs), each having a unique temporal signature. Each cluster's temporal signature is explained by hypothesizing the arrival and departure of a small set of objects. Explanations of less ambiguous clusters are used to disambiguate the explanations of their more complicated neighbors. The recovered objects and arrival/departure events explain each frame in the image sequence by specifying (1) which objects are present, (2) the ways in which they occlude each other, and (3) the pixels in the frame that are not part of the background or the objects.

Our framework ignores distracting motion, correctly deals with occlusion (including mutual occlusion), and recovers entire objects even in cases where they are partially occluded in every frame. Because we do not use spatial information in our clustering steps, the technique differs significantly from, and complements, traditional spatially based segmentation algorithms. The recovered 2D object masks are suitable for unsupervised training and initialization of object recognition and tracking systems.
Note:
The CVPR technical sketch is an abridged version of the CRL technical report.
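
The clustering step groups pixels purely by their temporal behavior, with no spatial information. The sketch below is a minimal illustration of that idea under simplifying assumptions: the coarse change-based signature, the exact-match grouping, and the temporally_coherent_clusters name are stand-ins, not the paper's TCC construction.

```python
import numpy as np
from collections import defaultdict

def temporally_coherent_clusters(video: np.ndarray, threshold: float = 25.0):
    """Group pixels of a (frames, rows, cols) sequence by temporal behavior alone.

    Each pixel gets a coarse temporal signature: the frames at which its
    intensity changes sharply. Pixels with identical signatures fall into
    the same cluster; no spatial homogeneity or adjacency is used.
    """
    changes = np.abs(np.diff(video.astype(float), axis=0)) > threshold  # (frames-1, rows, cols)
    clusters = defaultdict(list)
    _, rows, cols = video.shape
    for r in range(rows):
        for c in range(cols):
            signature = tuple(np.flatnonzero(changes[:, r, c]))
            clusters[signature].append((r, c))
    return clusters
```

Each cluster's signature would then be explained by hypothesizing the arrival and departure of a small set of objects, with the explanations of less ambiguous clusters disambiguating those of their more complicated neighbors, as described in the abstract.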


PDF Caveat: All .pdfs are PDF-1.2; viewing and printing do not work with old versions of Ghostscript and Acrobat. Additionally, Acrobat Reader 4.0 has a bug that causes certain characters (e.g., the minus sign) to be absent when the file is printed, even though they display properly in the reader. Subsequent versions of Acrobat Reader have corrected this bug.


Email:
sanders@cs.rochester.edu

Last change: January 8, 2002