Monday, April 23, 2001
11:00 AM
CSB 209
Michael Collins
AT&T Labs Research
Statistical Models for Natural Language Parsing
This talk will discuss the problem of machine learning applied to natural language parsing: given a set of example sentence/tree pairs, the task is to learn a function from sentences to trees that generalizes well to new sentences.
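
Stated a little more formally (a sketch of the standard formulation; the notation is mine, not necessarily the talk's): given training pairs \{(s_i, t_i)\}_{i=1}^{n}, learn a function

    f(s) = \arg\max_{t \in \mathcal{T}(s)} \mathrm{Score}(t, s)

where \mathcal{T}(s) is the set of candidate trees for sentence s. In the probabilistic models discussed below, \mathrm{Score}(t, s) is a (log-)probability estimated from the training pairs.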

In the first part of the talk I will review recent work on probabilistic, history-based approaches. Much of the success of these methods has been due to the incorporation of lexically conditioned parameters. I will discuss the importance of head words and dependency parameters, as well as the use of estimation techniques such as decision trees and maximum entropy models.
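
To make the role of lexical conditioning concrete, here is a minimal sketch, not the talk's actual model: a tree is scored as a product of head-modifier dependency probabilities estimated by relative frequency from toy data (all words and events below are invented for illustration).

    from collections import Counter
    from math import log

    # Toy head-modifier dependency events, as if extracted from example
    # trees with head-finding rules: (head_word, modifier_word) pairs.
    events = [
        ("saw", "man"),
        ("saw", "dog"),
        ("man", "the"),
        ("dog", "the"),
    ]

    pair_counts = Counter(events)
    head_counts = Counter(h for h, _ in events)

    def dependency_logprob(head, modifier):
        """Unsmoothed relative-frequency estimate of log P(modifier | head).
        A real model would smooth and back off to part-of-speech statistics."""
        count = pair_counts[(head, modifier)]
        if count == 0:
            return float("-inf")  # a real model backs off instead of failing
        return log(count / head_counts[head])

    def tree_logprob(dependencies):
        """Score a candidate tree, represented by its (head, modifier) events:
        the tree's log-probability is the sum over its dependency events."""
        return sum(dependency_logprob(h, m) for h, m in dependencies)

    # Two candidate analyses of "the man saw the dog": lexical conditioning
    # lets the model prefer dependencies it has actually observed.
    good = [("saw", "man"), ("saw", "dog"), ("man", "the"), ("dog", "the")]
    bad = [("man", "saw"), ("saw", "dog"), ("man", "the"), ("dog", "the")]
    print(tree_logprob(good))  # finite log-probability
    print(tree_logprob(bad))   # -inf: ("man", "saw") was never observed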

While history-based models have several advantages, it can be awkward to encode some constraints within this framework. It is often easy to think of features that might be useful in discriminating between candidate trees for a sentence, but much more difficult to alter the model to take these features into account. In the second part of the talk I will review more recent work on learning methods that promise to be considerably more flexible in incorporating features. I will discuss how three such approaches -- boosting, support vector machines, and Markov random fields -- can be applied to parsing, and the similarities and relative advantages of the three approaches.
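
All three approaches can each be viewed, roughly, as learning weights for a linear score over arbitrary tree features. The sketch below shows that shared idea using a simple perceptron-style trainer, which stands in for, and is not, the boosting, SVM, or MRF training procedures the talk covers; the feature names are invented for illustration.

    from collections import defaultdict

    def score(weights, features):
        """Dot product of the weight vector with a sparse feature dict."""
        return sum(weights[name] * value for name, value in features.items())

    def train_reranker(examples, epochs=5):
        """examples: list of (candidates, gold_index) pairs, where candidates
        is a list of sparse feature dicts, one per candidate tree."""
        weights = defaultdict(float)
        for _ in range(epochs):
            for candidates, gold in examples:
                best = max(range(len(candidates)),
                           key=lambda i: score(weights, candidates[i]))
                if best != gold:
                    # Promote the gold tree's features, demote the winner's.
                    for name, value in candidates[gold].items():
                        weights[name] += value
                    for name, value in candidates[best].items():
                        weights[name] -= value
        return weights

    # Hypothetical features for two candidate trees of one sentence; any
    # property of a tree can be a feature, which is exactly the flexibility
    # that is hard to achieve in a history-based model.
    candidates = [
        {"rule:S->NP VP": 1.0, "dep:man->saw": 1.0},  # wrong attachment
        {"rule:S->NP VP": 1.0, "dep:saw->man": 1.0},  # gold tree
    ]
    weights = train_reranker([(candidates, 1)])
    print(max(candidates, key=lambda f: score(weights, f)))  # the gold tree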