News & Events

Events

January 26, 2017, 03:30 PM
Liang Huang: Linear-Time Structure Prediction in Language and Biology

[Thursday, January 26, 2017 at 3:30 PM in Meliora 203]

Abstract:

Why are computers so bad at processing human languages yet so good at processing programming languages? What is the key difference between English and C++ that makes the former so much harder? In this talk I'll present a linear-time (approximate) dynamic programming algorithm for incremental parsing, inspired by both human sentence processing (psycholinguistics) and compiler theory (LR parsing). Because it runs in linear time, this algorithm is much faster than the dominant O(n^3) algorithms, yet just as accurate. It overcomes the ambiguity-explosion problem through local ambiguity packing, similar to that found in psycholinguistics. More interestingly, there is a striking connection between linguistics and biology: natural language parsing and RNA secondary structure prediction rely on the same very slow O(n^3) algorithms. While natural language sentences are rarely longer than 100 words, RNA sequences can run to 4,000 nucleotides, so there is a critical need for faster algorithms. We can therefore adapt the same linear-time dynamic programming idea to predict RNA secondary structures in linear time, yielding predictions that are orders of magnitude faster without loss of accuracy.
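
For readers unfamiliar with the cubic baseline the talk improves on, here is a minimal Python sketch of the classic Nussinov-style O(n^3) dynamic program for RNA secondary structure prediction (maximizing base pairs), which plays the same role as the O(n^3) chart-parsing algorithms for language. The function names, the simple pairing rule, and the min_loop parameter are illustrative assumptions; the speaker's linear-time approximate DP (beam pruning with local ambiguity packing) is not reproduced here.

# Illustrative baseline only, not the speaker's algorithm: a Nussinov-style
# O(n^3) dynamic program for RNA secondary structure (maximize base pairs).

def pairs(a, b):
    # Watson-Crick pairs plus the G-U wobble pair.
    return {a, b} in ({'A', 'U'}, {'C', 'G'}, {'G', 'U'})

def nussinov_max_pairs(seq, min_loop=3):
    n = len(seq)
    best = [[0] * n for _ in range(n)]       # best[i][j] = max pairs within seq[i..j]
    for span in range(min_loop + 1, n):      # fill the table by increasing span length
        for i in range(n - span):
            j = i + span
            score = best[i][j - 1]           # case 1: position j stays unpaired
            for k in range(i, j - min_loop): # case 2: j pairs with k; O(n) choices -> O(n^3) total
                if pairs(seq[k], seq[j]):
                    left = best[i][k - 1] if k > i else 0
                    score = max(score, left + 1 + best[k + 1][j - 1])
            best[i][j] = score
    return best[0][n - 1] if n else 0

print(nussinov_max_pairs("GGGAAAUCC"))  # toy sequence; at thousands of nucleotides the cubic cost becomes prohibitive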

Bio:

Liang Huang is currently an Assistant Professor of EECS at Oregon State University and a part-time Research Scientist with IBM's Watson Group. Before that, he was an Assistant Professor for three years at the City University of New York (CUNY). He received his Ph.D. from Penn in 2008 and has worked as a Research Scientist at Google and a Research Assistant Professor at USC/ISI. Most of his work develops fast algorithms and provable theory to speed up large-scale natural language processing, structured machine learning, and computational structural biology. He has received a Best Paper Award at ACL 2008, a Best Paper Honorable Mention at EMNLP 2016, several best paper nominations (ACL 2007, EMNLP 2008, and ACL 2010), two Google Faculty Research Awards (2010 and 2013), a Yahoo! Faculty Research Award (2015), and a University Teaching Prize at Penn (2005). His research has been supported by DARPA, NSF, Google, and Yahoo. He also co-authored a best-selling textbook in China on algorithms for programming contests.

Host: Dan Gildea, daniel.gildea@rochester.edu


February 14, 2017, 12:45 PM
Stefanie Tellex: Learning Models of Language, Action and Perception for Human-Robot Collaboration

[Tuesday, February 14, 2017 at 12:45 PM in Gavette 202]

Abstract:

Robots can act as a force multiplier for people, whether it is a robot assisting an astronaut with a repair on the International Space Station, a UAV taking flight over our cities, or an autonomous vehicle driving through our streets. To achieve complex tasks, it is essential for robots to move beyond merely interacting with people and toward collaboration, so that one person can easily and flexibly work with many autonomous robots. The aim of my research program is to create autonomous robots that collaborate with people to meet their needs by learning decision-theoretic models for communication, action, and perception. Communication for collaboration requires models of language that map between sentences and aspects of the external world. My work enables a robot to learn compositional models for word meanings that allow it to explicitly reason and communicate about its own uncertainty, increasing the speed and accuracy of human-robot communication. Action for collaboration requires models that match how people think and talk, because people communicate about all aspects of a robot's behavior, from low-level motion preferences (e.g., "Please fly up a few feet") to high-level requests (e.g., "Please inspect the building"). I am creating new methods for learning how to plan in very large, uncertain state-action spaces using hierarchical abstraction. Perception for collaboration requires the robot to detect, localize, and manipulate the objects in its environment that are most important to its human collaborator. I am creating new methods for autonomously acquiring perceptual models in situ so the robot can perceive the objects most relevant to the human's goals. My unified decision-theoretic framework supports data-driven training and robust, feedback-driven human-robot collaboration.

Bio:

Stefanie Tellex is an Assistant Professor of Computer Science and Assistant Professor of Engineering at Brown University. Her group, the Humans To Robots Lab, creates robots that seamlessly collaborate with people to meet their needs using language, gesture, and probabilistic inference, aiming to empower every person with a collaborative robot. She completed her Ph.D. at the MIT Media Lab in 2010, where she developed models for the meanings of spatial prepositions and motion verbs. Her postdoctoral work at MIT CSAIL focused on creating robots that understand natural language. She has published at SIGIR, HRI, RSS, AAAI, IROS, ICAPS, and ICMI, winning Best Student Paper at SIGIR and ICMI, Best Paper at RSS, and an award from the CCC Blue Sky Ideas Initiative. Her awards include being named one of IEEE Spectrum's AI's 10 to Watch in 2013, the Richard B. Salomon Faculty Research Award at Brown University, a DARPA Young Faculty Award in 2015, and a 2016 Sloan Research Fellowship. Her work has been featured in the press by National Public Radio, MIT Technology Review, Wired UK, and the Smithsonian. She was named one of Wired UK's Women Who Changed Science in 2015, and her work was listed among MIT Technology Review's Ten Breakthrough Technologies in 2016.


February 17, 2017, 12:45 PM
Margaret Mitchell: TBD

[Friday, February 17, 2017 at 12:45 PM in CSB 601]

TBD