Computer Science @ Rochester
Tuesday, June 29, 2004
11:00 AM
CSB 209
Chen Yu
University of Rochester
Embodied Language Learning in Humans and Machines
This thesis addresses questions of embodiment in language learning: how language is grounded in sensorimotor experience, and how language development depends on complex interactions among brain, body, and environment. Most studies of human language acquisition have focused on purely linguistic input. We believe, however, that non-linguistic information, such as visual attention and body movement, also serves as an important driving force in language learning. This work combines empirical and computational studies to build a formal model of the computational role of non-linguistic information and to obtain a more complete picture of language acquisition. We first introduce a statistical learning mechanism that provides a formal account of cross-situational observation. We then propose a unified model that incorporates different kinds of social cues, such as joint attention and prosody in speech, into the statistical learning framework. Finally, we study the role of non-linguistic information in word learning with adult subjects exposed to a second language. The results show that eye gaze significantly facilitates both speech segmentation and word-meaning association.
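
To make the cross-situational idea concrete, the following is a minimal sketch of word-meaning learning from co-occurrence statistics, using an EM estimation in the spirit of IBM Model 1. The data format (each situation pairing a bag of spoken words with a bag of co-present referents) and the specific estimation procedure are illustrative assumptions, not the exact algorithm presented in the talk.

    # Minimal sketch of cross-situational word-meaning learning (assumption:
    # EM over word/referent co-occurrence, IBM Model 1 style).
    from collections import defaultdict

    def cross_situational_learn(situations, iterations=20):
        """situations: list of (words, meanings) pairs, each a list of symbols."""
        words = {w for ws, _ in situations for w in ws}
        meanings = {m for _, ms in situations for m in ms}
        # Start from a uniform association probability p(meaning | word).
        p = {w: {m: 1.0 / len(meanings) for m in meanings} for w in words}
        for _ in range(iterations):
            count = defaultdict(lambda: defaultdict(float))
            total = defaultdict(float)
            for ws, ms in situations:
                for m in ms:
                    norm = sum(p[w][m] for w in ws)
                    for w in ws:
                        frac = p[w][m] / norm  # expected responsibility of w for m
                        count[w][m] += frac
                        total[w] += frac
            for w in words:
                for m in meanings:
                    p[w][m] = count[w][m] / total[w] if total[w] else 0.0
        return p

    if __name__ == "__main__":
        data = [(["look", "at", "the", "dog"], ["DOG", "FLOOR"]),
                (["the", "dog", "runs"], ["DOG"]),
                (["pick", "up", "the", "cup"], ["CUP", "TABLE"])]
        probs = cross_situational_learn(data)
        print(max(probs["dog"], key=probs["dog"].get))  # most probable meaning of "dog"

Across situations, ambiguous referents are resolved because the correct word-meaning pairs co-occur more consistently than spurious ones.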

In light of these findings on human language acquisition, we develop a multimodal embodied system that learns words from natural interactions with users. The system is trained in an unsupervised mode in which users perform everyday tasks while providing natural language descriptions of their behaviors. It collects acoustic signals in concert with user-centric multisensory information from non-speech modalities, such as first-person video, gaze positions, head directions, and hand movements. A multimodal learning algorithm uses these data first to spot words in continuous speech and then to associate action verbs and object names with their perceptually grounded meanings. The central ideas are to use non-speech contextual information to facilitate word spotting, and to treat body movements as deictic references that bind temporally co-occurring data from different modalities into lexical items. This work represents a first step toward computational systems capable of human-like sensory perception.
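
As a rough illustration of the deictic-reference idea, the sketch below pairs spotted word candidates with the objects attended at the time they are uttered, scoring associations by temporal co-occurrence. The input format (timestamped word tokens and gaze fixations labelled with attended objects) and the fixed time window are assumptions made for this example only.

    # Minimal sketch: associate spotted words with gaze-attended objects by
    # temporal co-occurrence (assumed data format and window size).
    from collections import Counter

    def associate(word_tokens, fixations, window=1.0):
        """word_tokens: [(word, time_sec)]; fixations: [(object, start_sec, end_sec)]."""
        pairs = Counter()
        for word, t in word_tokens:
            for obj, start, end in fixations:
                # A word is paired with an object if it is uttered while (or
                # shortly before/after) the object is being attended.
                if start - window <= t <= end + window:
                    pairs[(word, obj)] += 1
        return pairs

    if __name__ == "__main__":
        words = [("cup", 2.1), ("pour", 2.8), ("cup", 7.4)]
        gaze = [("CUP", 1.5, 3.0), ("KETTLE", 3.5, 6.0), ("CUP", 7.0, 8.0)]
        for (w, o), n in associate(words, gaze).most_common():
            print(w, o, n)

In the full system, such co-occurrence evidence would be accumulated over many interactions and combined with the statistical learning mechanism above to form grounded lexical items.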