Christopher M. Brown, URCS Faculty Member

b. 1945. Ph.D. (1972) University of Chicago. Research Fellow, Department of Machine Intelligence, University of Edinburgh (72-75). Visiting Assistant Professor, Computer Science and Visiting Research Associate, Production Automation Project (75), Assistant Professor (75-82), Associate Professor (83-90), Professor (90-present), Department Chair (83-87); University of Rochester. Co-author of Computer Vision (82).

Chris Brown's current activities center on building behaving, visually guided systems. Much of this work is carried out jointly with his colleagues Nelson and Ballard, and in collaboration with the planning research group (Allen, Schubert, and Kyburg) and the systems group (LeBlanc, Scott, and Li).

The vision work spans a range of topics, from traditional symbolic artificial intelligence to low-level aspects of control, and includes assembling the hardware and software systems that support it.

Our work on gaze control has explored the cooperation of different control mechanisms (for example, those that track objects and those that verge the eyes), both to improve tracking performance and to render early visual operations more robust in dynamic scenes.
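As a rough illustration of cooperating control mechanisms (not the lab's actual controller), the sketch below combines two of them in one loop: a tracker that drives pan/tilt from the target's image offset, and a vergence controller that nulls binocular disparity. All names, gains, and units are invented for the example.

    import numpy as np

    def gaze_step(target_px, disparity_px, pan_tilt, vergence,
                  k_track=0.05, k_verge=0.03):
        # One control step combining two cooperating mechanisms.
        # Tracking: drive pan/tilt toward the target's image offset.
        pan_tilt = pan_tilt + k_track * np.asarray(target_px, dtype=float)
        # Vergence: rotate the eyes toward each other until the
        # target's left/right image disparity is nulled.
        vergence = vergence + k_verge * float(disparity_px)
        return pan_tilt, vergence

    # Example: target 20 px right, 5 px up, 8 px of crossed disparity.
    pt, vg = gaze_step((20.0, 5.0), 8.0, np.zeros(2), 0.0)
    print(pt, vg)

Running the two controllers in the same loop is what lets them cooperate: vergence keeps the target at near-zero disparity, which in turn makes the tracker's early visual measurements more reliable.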

Learning is a basic capability that relieves the AI programmer of the impossible burden of anticipating every possible situation. Learning is a very active topic at Rochester. Some work on learning control structures, skilled action sequences, and three-dimensional object representations is described in the November 1991 issue of the International Journal of Computer Vision, which is devoted to vision research at Rochester. More recent work involves learning control strategies for skilled sensorimotor tasks.
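To make "learning a control strategy" concrete, here is a generic tabular Q-learning loop; it is a textbook stand-in, not the specific algorithms in the work cited above, and the step function, corridor task, and parameters are all hypothetical.

    import random

    def q_learning(n_states, n_actions, step, episodes=500,
                   alpha=0.1, gamma=0.9, eps=0.1):
        # Generic tabular Q-learning; step(s, a) must return
        # (next_state, reward, done).
        Q = [[0.0] * n_actions for _ in range(n_states)]
        for _ in range(episodes):
            s, done = 0, False
            while not done:
                # Epsilon-greedy action selection.
                a = (random.randrange(n_actions) if random.random() < eps
                     else max(range(n_actions), key=lambda x: Q[s][x]))
                s2, r, done = step(s, a)
                # Standard temporal-difference update.
                Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
                s = s2
        return Q

    # Toy task: a 5-state corridor; action 1 moves right, action 0 left,
    # with reward for reaching the right end.
    def step(s, a):
        s2 = min(4, s + 1) if a == 1 else max(0, s - 1)
        return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

    Q = q_learning(5, 2, step)
    print(max(range(2), key=lambda a: Q[0][a]))  # learned action at start: 1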

Invoking "learning" can be a crutch, however; one needs to be technically careful about what is meant by the word and what phenomena one is trying to explain or produce using learning techniques. The "Vision, Learning, and Development Tech. Report looks at the problem of visual development to assess the adequacy or necessity of current learning models to explain the acquisition of visual abilities.

The high-level, cognitive control of scarce visual resources (such as deciding where to point a narrow-angle camera in a wide-angle scene) is a vital topic for automatic systems, which (like humans) have no hope of visually analyzing everything continuously. We are using Bayesian probability theory and decision analysis to develop a theory of selective visual attention and task-oriented vision, and applying it to direct our binocular robot head (student Rimey). Probabilistic reasoning is also used to interpret consumer photos, in joint work with Eastman Kodak Company (students Singhal and Boutell).
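One way to cash out "deciding where to point the camera" in Bayesian terms is to pick the fixation with the greatest expected information gain, i.e., the one that most reduces expected posterior entropy over the hypotheses of interest. The sketch below does exactly that over discrete hypotheses and observations; it is an illustrative toy, not Rimey's system, and belief, likelihoods, and best_fixation are invented names.

    import math

    def entropy(p):
        return -sum(q * math.log(q) for q in p if q > 0.0)

    def best_fixation(belief, likelihoods):
        # belief[h] is P(hypothesis h); likelihoods[f][h][o] is
        # P(observation o | hypothesis h, fixation f).
        def expected_posterior_entropy(f):
            total = 0.0
            n_obs = len(likelihoods[f][0])
            for o in range(n_obs):
                joint = [belief[h] * likelihoods[f][h][o]
                         for h in range(len(belief))]
                p_o = sum(joint)
                if p_o > 0.0:
                    # Weight each possible outcome's posterior entropy
                    # by the probability of observing it.
                    total += p_o * entropy([j / p_o for j in joint])
            return total
        # Maximizing expected entropy reduction is the same as
        # minimizing expected posterior entropy.
        return min(range(len(likelihoods)), key=expected_posterior_entropy)

    # Two hypotheses, two candidate fixations, binary observations.
    belief = [0.5, 0.5]
    lik = [[[0.9, 0.1], [0.1, 0.9]],   # fixation 0: informative
           [[0.5, 0.5], [0.5, 0.5]]]   # fixation 1: uninformative
    print(best_fixation(belief, lik))  # -> 0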

Augmented reality (mixing real-time graphics with live video) raises several technical problems, and over the past decade we've been working on aspects of this area (see students Vallino and Hu).
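The core registration problem in augmented reality is deciding where in the live video frame to draw each virtual object, given an estimate of the camera's pose. A minimal sketch using the standard pinhole camera model follows; the intrinsics, pose, and project function are illustrative assumptions, not the methods of the theses above.

    import numpy as np

    def project(point_w, K, R, t):
        # Map a virtual 3-D point (world frame) to pixel coordinates
        # in the video image, given camera rotation R, translation t,
        # and intrinsic matrix K (pinhole model).
        p_cam = R @ np.asarray(point_w, dtype=float) + t
        uvw = K @ p_cam
        return uvw[:2] / uvw[2]

    # Illustrative intrinsics: 500 px focal length, 320x240 center.
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.array([0.0, 0.0, 2.0])  # camera 2 m back
    print(project([0.1, -0.05, 0.0], K, R, t))   # where to draw the overlay

If the pose estimate (R, t) lags or drifts relative to the real camera, the overlay visibly swims against the video, which is why real-time, accurate registration is the hard part of the problem.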

The UR undergraduate robotics team is one of several undergraduate research projects in CS. The roboteers won in Edmonton with no help from me, but I'm the official sponsor and associated academic guy, so I get to point to them.

Collaboration with forward-looking industry continues with AppleAid Inc., a local research-oriented robotics firm (we're working on navigation and obstacle avoidance), and most recently through the thesis of Chris Eveland (Dec. 2002), who has been working at Equinox Corp. on special sensors (polarization and long- and medium-wave IR).

For more on what's really going on, see students' individual pages.
