Tuesday, April 22, 2003
11:00 AM
CSB 209
Daniel Gildea
U. Pennsylvania
Statistical Approaches to Natural Language Semantics
Understanding human language is a central problem in artificial intelligence, important both for enabling more natural human-computer interaction and for allowing systems to access and synthesize the vast amount of textual information available online. Statistical approaches have been very successful at natural language processing tasks such as syntactic parsing, chunking, and part-of-speech tagging, but are only beginning to be applied to problems involving deeper representations of meaning. This talk will present machine learning approaches to natural language understanding and machine translation.

In the area of understanding, systems that extract meaning from text have generally focused on narrow application domains. I will examine the problem of generality, presenting systems trained on human semantic annotations from two broad-coverage corpora, PropBank and FrameNet. Motivated by the limited availability of hand-annotated text, I will discuss methods of generalizing to new domains and unseen vocabulary by making use of unlabeled data.

Accurate translation between human languages remains difficult for similar reasons; sentences in the target language can be thought of as another type of semantic representation to be automatically acquired. Current approaches to machine translation are divided between the symbolic approach, which analyzes the source language into an interlingua and then generates text in the target language, and the statistical approach, which trains systems directly on large amounts of parallel text in the two languages. I will present recent results for a hybrid approach that uses syntactic representations to augment statistical models of machine translation while allowing for an inexact match between the structures of the two languages.