Natural language is a compelling modality for controlling complex systems such as robots, with its promise of powerful, intuitive interaction. However, robustly understanding language from untrained users is a challenging problem. In this talk I describe a probabilistic approach to understanding natural language commands given to robots. The framework, called Generalized Grounding Graphs, defines a probabilistic graphical model that maps between constituents in the language and objects and actions in the external world. The framework learns models for the meanings of complex verbs such as "put" and "take," as well as spatial relations such as "on" and "to." The model allows efficient inference and learning by using the compositional structure of a natural language command to factor the distribution over interpretations. This factorization enables it to compose learned word meanings and understand novel commands that it has never previously encountered. The system is trained and evaluated using parallel corpora of language paired with robot actions, collected via crowdsourcing. Grounding graphs are a first step towards robots that can robustly interact with a human partner using natural language.
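As a rough illustration of the factorization mentioned above (a minimal sketch using notation assumed from the published Generalized Grounding Graphs papers, not quoted from this abstract), the model can be written as a product of factors, one per linguistic constituent:

    p(\Phi \mid \Lambda, \Gamma) \;=\; \prod_i p(\varphi_i \mid \lambda_i,\, \gamma_{i_1}, \ldots, \gamma_{i_k})

Here each \lambda_i is a constituent of the command, the \gamma variables are candidate groundings (objects, places, paths, or actions) in the robot's world model, and \varphi_i is a correspondence variable indicating whether the groundings match the constituent. Because each factor is tied to a word or phrase rather than to a whole sentence, factors learned from training data can be recombined to interpret commands never seen during training.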
Bio: Stefanie Tellex is a Research Scientist at the MIT Computer Science and Artificial Intelligence Laboratory. She completed her Ph.D. at the MIT Media Lab in 2010, where she developed models for the meanings of spatial prepositions and motion verbs. She has published at SIGIR, HRI, AAAI, IROS, and ICMI, winning Best Student Paper at SIGIR and ICMI. Her research interests include probabilistic graphical models, human-robot interaction, and grounded language understanding.