Monday, February 21, 2011
11:00 AM
Computer Studies Bldg. 209
Ruslan Salakhutdinov
Massachusetts Institute of Technology
Learning Hierarchical Generative Models
Building intelligent systems that are capable of extracting meaningful representations from high-dimensional data lies at the core of solving many Artificial Intelligence tasks, including visual object recognition, information retrieval, speech perception, and language understanding. My approach to discovering high-level representations focuses on learning rich generative models with deep hierarchical structure that support inferences at multiple levels.

In this talk, I will introduce a broad class of probabilistic generative models called Deep Boltzmann Machines (DBMs), and a new algorithm for learning these models that uses variational methods and Markov chain Monte Carlo. I will show that DBMs can learn useful hierarchical representations from large volumes of high-dimensional data, and they can be successfully applied in many domains, including information retrieval, object recognition, and nonlinear dimensionality reduction. I will then describe new ways of developing more complex probabilistic models that combine Deep Boltzmann Machines with structured hierarchical Bayesian models. I will show how this new class of models can learn a deep hierarchical structure for sharing knowledge across hundreds of visual categories, which allows accurate learning of novel visual concepts from few examples.
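For readers unfamiliar with the model class, the following is a minimal sketch of a two-layer DBM in standard notation (not taken from the announcement; the symbols v, h^1, h^2, W^1, W^2 and theta are generic placeholders, and bias terms are omitted for brevity). The log-likelihood gradient splits into a data-dependent expectation, which is typically approximated with variational (mean-field) inference, and a model expectation, which is typically estimated with Markov chain Monte Carlo, mirroring the combination of techniques mentioned in the abstract.

\begin{aligned}
E(\mathbf{v}, \mathbf{h}^1, \mathbf{h}^2; \theta)
  &= -\mathbf{v}^\top \mathbf{W}^1 \mathbf{h}^1
     - (\mathbf{h}^1)^\top \mathbf{W}^2 \mathbf{h}^2, \\
P(\mathbf{v}; \theta)
  &= \frac{1}{Z(\theta)} \sum_{\mathbf{h}^1, \mathbf{h}^2}
     \exp\bigl(-E(\mathbf{v}, \mathbf{h}^1, \mathbf{h}^2; \theta)\bigr), \\
\frac{\partial \log P(\mathbf{v}; \theta)}{\partial \mathbf{W}^1}
  &= \mathbb{E}_{P(\mathbf{h}^1, \mathbf{h}^2 \mid \mathbf{v})}
       \bigl[\mathbf{v}\,(\mathbf{h}^1)^\top\bigr]
   - \mathbb{E}_{P(\mathbf{v}, \mathbf{h}^1, \mathbf{h}^2)}
       \bigl[\mathbf{v}\,(\mathbf{h}^1)^\top\bigr].
\end{aligned}

Because both expectations are intractable in a DBM, neither can be computed exactly; the variational and MCMC approximations above are what make learning at scale feasible, which is the setting the talk addresses.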

Bio: Ruslan Salakhutdinov received his PhD in computer science from the University of Toronto in 2009, and he is now a postdoctoral associate at CSAIL and the Department of Brain and Cognitive Sciences at MIT. His broad research interests lie in machine learning, computational statistics, and large-scale optimization. He is a recipient of the NSERC Postdoctoral Fellowship and the Canada Graduate Scholarship.

Refreshments will be provided at 11:00 AM.