Computer Science @ Rochester
Monday, April 29, 2002
11:00 AM
CSB 209
Ph.D. Thesis Proposal
Chen Yu
University of Rochester
Embodied Learning from Multisensory Input
Humans develop based on sensorimotor experiences with the physical environment. One vital role of the human brain is to bridge the gap from embodied experience to its expression as abstract symbols. To mimic this human skill, a central challenge in machine intelligence is how to ground symbolic meanings in non-symbolic sensorimotor information, which is termed the symbol grounding problem. In this work, we explore this problem by studying the associations between language, visual perception, and body movements. We believe that understanding and modeling how we associate language with its grounded meanings is the key to understanding our own minds and, ultimately, to creating artificial ones.

To appreciate the importance of embodiment, an intelligent system needs to gain sensorimotor experience by interacting with the world. Our solution is to attach multiple sensors to a real agent, so that the intelligent system can see what the agent sees, hear what the agent hears and says, and experience the agent's environment from a first-person perspective. These sensors function much like the human sensory system, allowing the intelligent system to collect multisensory data and thereby simulate the learning process in humans. Specifically, we are developing an intelligent system that grounds semantics in visual perception and action: spoken names of objects are associated with their perceptually grounded meanings, and action verbs are grounded in the body movements of the agent. To achieve this goal, a multimodal learning algorithm will be developed based on the spatio-temporal constraints among co-occurring data from different modalities. As a result of this grounded word learning, the system can demonstrate its intelligence by describing visual events verbally.
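
To make the co-occurrence idea concrete, the following is a minimal illustrative sketch, not the proposed algorithm: it treats each learning episode as a bag of heard words paired with a bag of visually observed objects and estimates word-object associations from simple co-occurrence counts, ignoring the richer spatio-temporal constraints the proposal describes. All function names and data in the sketch are hypothetical.

```python
from collections import defaultdict

def learn_word_object_associations(episodes):
    """Estimate word-object association scores from co-occurrence counts.

    episodes: list of (words, objects) pairs, where `words` are the tokens
    heard in one episode and `objects` are the referents in view.
    Returns a dict mapping (word, object) -> P(object | word) estimate.
    """
    cooccur = defaultdict(float)     # joint counts of (word, object)
    word_count = defaultdict(float)  # marginal counts of each word

    for words, objects in episodes:
        for w in set(words):
            word_count[w] += 1.0
            for o in set(objects):
                cooccur[(w, o)] += 1.0

    # Normalize joint counts by word frequency to get association scores.
    return {(w, o): c / word_count[w] for (w, o), c in cooccur.items()}

if __name__ == "__main__":
    # Hypothetical episodes: utterance tokens paired with objects in view.
    episodes = [
        (["look", "at", "the", "cup"], ["cup", "table"]),
        (["the", "cup", "is", "red"], ["cup"]),
        (["grab", "the", "ball"], ["ball", "table"]),
    ]
    scores = learn_word_object_associations(episodes)
    print(scores[("cup", "cup")])   # 1.0: "cup" always co-occurs with the cup
    print(scores[("the", "cup")])   # ~0.67: "the" is spread across objects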