Computer Science @ Rochester
Friday, February 23, 2007
12:30 PM
CSB 703
Edward Loper
University of Rochester
Encoding Structured Output Values
Many of the NLP tasks that we would like to model with machine learning techniques generate structured output values, such as trees, lists, or groupings. These structured output problems can be modelled by decomposing them into a set of simpler sub-problems, with well-defined and well-constrained interdependencies between sub-problems. However, the effectiveness of this approach depends to a large degree on exactly how the problem is decomposed into sub-problems, and on how those sub-problems are divided into equivalence classes.
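
As a concrete illustration of this kind of decomposition (a hypothetical sketch, not drawn from the talk itself), NP chunking is commonly reduced to per-token tagging, where the choice of tag scheme is exactly the choice of output encoding. The Python sketch below contrasts two standard schemes, IOB2 and IOE2; the function name and span representation are illustrative assumptions.

def encode_chunks(tokens, chunks, scheme="IOB2"):
    """Map (start, end) NP chunk spans onto per-token tags.

    Each token becomes one classification sub-problem; the encoding
    scheme decides how chunk structure is split across those tags.
    """
    tags = ["O"] * len(tokens)
    for start, end in chunks:
        for i in range(start, end):
            tags[i] = "I-NP"
        if scheme == "IOB2":
            tags[start] = "B-NP"       # every chunk starts with B
        elif scheme == "IOE2":
            tags[end - 1] = "E-NP"     # every chunk ends with E
    return tags

tokens = ["The", "cat", "sat", "on", "the", "mat"]
chunks = [(0, 2), (4, 6)]              # "The cat", "the mat"
print(encode_chunks(tokens, chunks, "IOB2"))
# ['B-NP', 'I-NP', 'O', 'O', 'B-NP', 'I-NP']
print(encode_chunks(tokens, chunks, "IOE2"))
# ['I-NP', 'E-NP', 'O', 'O', 'I-NP', 'E-NP']

Both encodings describe the same chunk structure, but they define different equivalence classes of tokens, and so pose different learning problems to the tagger.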

I will show how the notion of "output encoding" can be used to examine the effects of problem decomposition on learnability for specific tasks. These effects can be divided into two general classes: local effects and global effects. Local effects, which influence the difficulty of learning individual sub-problems, depend primarily on the coherence of the classes defined by individual output tags. Global effects, which determine the model's ability to learn long-distance dependencies, depend on the information content of the output tags.

I will also present some recent results showing that the choice of appropriate output encodings can help improve performance in tasks including NP chunking and semantic role labelling, and discuss the use of hill-climbing algorithms to automatically select an appropriate encoding for a given task. If time permits, I will also present an algorithm I have been developing for voting in structured output problems.
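
To make the search idea concrete, a greedy hill climb over candidate encodings might look like the sketch below. This is an assumed outline, not the speaker's algorithm: the neighbour generator, the scoring function (which would train and evaluate a model under each encoding), and the toy score table are all hypothetical placeholders.

def hill_climb_encoding(initial, neighbours, score, max_steps=100):
    """Greedy hill-climbing over output encodings (hypothetical sketch).

    `neighbours(enc)` yields small modifications of an encoding;
    `score(enc)` evaluates a model trained under that encoding.
    """
    current, best = initial, score(initial)
    for _ in range(max_steps):
        scored = [(score(e), e) for e in neighbours(current)]
        if not scored:
            break
        top_score, top = max(scored, key=lambda pair: pair[0])
        if top_score <= best:
            break                      # no neighbour improves: local optimum
        current, best = top, top_score
    return current, best

# Toy demo with made-up placeholder scores, purely for illustration.
toy_scores = {"IOB1": 0.90, "IOB2": 0.92, "IOE1": 0.89, "IOE2": 0.93}
toy_neighbours = lambda e: [x for x in toy_scores if x != e]
print(hill_climb_encoding("IOB1", toy_neighbours, toy_scores.get))
# ('IOE2', 0.93)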