In this talk, I will introduce a broad class of probabilistic generative models called Deep Boltzmann Machines (DBMs), along with a new algorithm for learning these models that combines variational methods with Markov chain Monte Carlo. I will show that DBMs can learn useful hierarchical representations from large volumes of high-dimensional data, and that they can be successfully applied in many domains, including information retrieval, object recognition, and nonlinear dimensionality reduction. I will then describe new ways of developing more complex probabilistic models that combine Deep Boltzmann Machines with structured hierarchical Bayesian models. I will show how this new class of models can learn a deep hierarchical structure for sharing knowledge across hundreds of visual categories, allowing novel visual concepts to be learned accurately from just a few examples.
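For readers unfamiliar with the model, here is a minimal sketch of the learning setup the abstract alludes to, assuming the standard two-layer binary DBM formulation with bias terms omitted for brevity:

\[
P(\mathbf{v}) \;\propto\; \sum_{\mathbf{h}^{(1)},\,\mathbf{h}^{(2)}} \exp\!\Big(\mathbf{v}^{\top} W^{(1)} \mathbf{h}^{(1)} + \mathbf{h}^{(1)\top} W^{(2)} \mathbf{h}^{(2)}\Big),
\qquad
\frac{\partial \log P(\mathbf{v})}{\partial W^{(1)}} \;=\; \mathbb{E}_{\text{data}}\!\big[\mathbf{v}\,\mathbf{h}^{(1)\top}\big] \;-\; \mathbb{E}_{\text{model}}\!\big[\mathbf{v}\,\mathbf{h}^{(1)\top}\big].
\]

In the learning algorithm referred to above, the data-dependent expectation is approximated with a mean-field variational posterior over the hidden units, while the model expectation is estimated with persistent Markov chain Monte Carlo (Gibbs sampling).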
Bio: Ruslan Salakhutdinov received his PhD in computer science from the University of Toronto in 2009 and is now a postdoctoral associate at CSAIL and the Department of Brain and Cognitive Sciences at MIT. His broad research interests lie in machine learning, computational statistics, and large-scale optimization. He is the recipient of the NSERC Postdoctoral Fellowship and the Canada Graduate Scholarship.
Refreshments will be provided at 11:00 AM.