Corpus-Driven Language Processing

What is this course about?

Natural language processing involves any aspect of processing, by computer, language as humans speak and write it, ranging from core topics in artificial intelligence such as building conversational robots to commercial applications such as web search engines. Language is one of the more complex human activities, in which we communicate thoughts and feelings via a sequence of sounds or symbols. For a machine to fully understand language as we do, it would need to be able to represent and identify the appropriate thoughts and feelings given examples of language in appropriate contexts. This is the long-term goal of natural language understanding.

This course has more modest aims. There are many tasks that can be performed without achieving understanding, by instead taking advantage of the structural properties of language. These tasks are rapidly becoming integrated into our technology base and are revolutionizing the way we use computers. For instance, some web search engines do a reasonable job of finding relevant pages out of hundreds of millions of possibilities by using techniques based on word matching. Speech recognition systems do a reasonable job using techniques for classifying acoustic phenomena coupled with knowledge about word sequence probabilities (e.g., how likely one word is to follow another). We are going to explore the basic concepts and techniques that are enabling this revolution.

You will not need any prior knowledge of linguistics, and need only a general familiarity with mathematics. Often material will be presented in two parallel ways - one dealing with the intuitive concepts and developing ideas by example, and the other giving details of the mathematical development. You can gain a good general understanding of the techniques and concepts used without looking at the mathematical sections. They are provided to give more precision and detail for those who may eventually want to apply the techniques to new problems.

Applications

There are many different applications that appear amenable to corpus-based approaches, including information retrieval, text retrieval, speech recognition, machine translation, document classification, message understanding and information extraction.

Methodology: Corpus-Based Models

There are at least two ways one might approach a problem such as, say, building a speech recognition system. In the knowledge-based approach, one might study the phenomena of speech very carefully and, based on that study, develop algorithms and data structures that encode our knowledge of the phenomena. We would have to encode algorithms to recognize all the different potential sounds, and algorithms to map these sounds into the appropriate words. Unfortunately, the variation in the actual acoustic data, and the scale of the problem of capturing linguistic structure, are such that we have not yet been able to build effective systems this way. The data-driven approach views the problem more as pattern recognition. We design a set of measurable features (or develop algorithms to find a set of features), and then use training and learning algorithms to find ways to identify these features from actual input. Typically, this involves building probabilistic models of the problem we are trying to solve and using large amounts of data (called a corpus; the plural is corpora).

One of the first corpora developed was the Brown Corpus. Some versions of this corpus are tagged, and specify not only the words but also additional linguistic information of interest, such as the part of speech. The Brown Corpus uses a set of 81 tags. One commonly used set of tags for word categories is the Penn Treebank tagset, which consists of about 40 entries. If we had a corpus consisting of a million tagged words or so, we could use that information to estimate various probabilities about linguistic information. For example, we might estimate the probability that the word level is a noun (as in He is on the level), an adjective (as in the level board), or a verb (as in The bulldozer can level the house) by simply adding up the number of times the word occurs in each category. For instance, say it occurs 2 times as a noun, 3 times as an adjective, and 5 times as a verb. We might estimate that the probability that the word level is a noun is 2/10 (i.e., the number of times it is a noun divided by the total number of occurrences).

On the other hand, we might find another probability distribution better for identifying the tag - say, a conditional probability that level is a noun given the preceding word. For example, the probability that level is a verb when it is preceded by the word the is near 0, while the probability that level is a verb when it follows the word can is near 1. We will introduce the notions of probability and estimating probabilities in the next few lectures. Once we have some probabilities associated with a corpus, we could then use them in a program that identifies the parts of speech for new data that we haven't seen before. Such a program is called a tagger (an example of one is available online), and we could develop many different techniques for using different probability estimates to decide on the tags.
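To make the counting idea concrete, here is a minimal sketch in Python (not the online tagger mentioned above). It assumes the tagged corpus is available as a list of (word, tag) pairs; the toy data and tag labels below are made up purely for illustration. It estimates P(tag | word) by relative frequency, and also a conditional estimate based on the preceding word.

    from collections import Counter, defaultdict

    # Toy tagged corpus as (word, tag) pairs; in practice this would be read
    # from a large tagged corpus such as the Brown Corpus.
    tagged = [("the", "ART"), ("level", "ADJ"), ("board", "N"),
              ("he", "PRO"), ("is", "V"), ("on", "P"), ("the", "ART"), ("level", "N"),
              ("the", "ART"), ("bulldozer", "N"), ("can", "AUX"), ("level", "V"),
              ("the", "ART"), ("house", "N")]

    # Count how often each word occurs with each tag.
    word_tag_counts = defaultdict(Counter)
    for word, tag in tagged:
        word_tag_counts[word.lower()][tag] += 1

    def tag_probabilities(word):
        # Estimate P(tag | word) by relative frequency.
        counts = word_tag_counts[word.lower()]
        total = sum(counts.values())
        return {tag: count / total for tag, count in counts.items()}

    print(tag_probabilities("level"))   # here: ADJ, N and V each get 1/3

    # A conditional estimate: the tag distribution for a word given the word before it.
    context_counts = defaultdict(Counter)
    for (prev, _), (word, tag) in zip(tagged, tagged[1:]):
        context_counts[(prev.lower(), word.lower())][tag] += 1

    counts = context_counts[("can", "level")]
    print({tag: c / sum(counts.values()) for tag, c in counts.items()})   # {'V': 1.0}

With real counts from a million-word tagged corpus, the same relative-frequency calculation would give the 2/10 estimate described above.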

 

The Nature of Corpora and Zipf’s Law

The major problem facing us in building statistical models for natural language is that there are many very rare events. For instance, given a million-word corpus, we expect that many words will be repeated many times, and we can use this information for estimating probabilities. To make this more precise, we distinguish between word types and word tokens. So while a million-word corpus contains a million word tokens, it typically contains many fewer word types, since words are repeated. In news text, for instance, there are typically about 1/6 as many types as tokens, namely 166,666 distinct word types, so on average each word type occurs about 6 times.

To complicate matters, however, in natural language corpora the most common word types occur extremely frequently, and conversely there is always an extremely large number of words that occur very rarely (i.e., only once). This is captured by Zipf's Law, which says that if we list words in rank order, starting from the most common word and going down to the words that occur only once, then the frequency of occurrence of the i'th ranked word is inversely proportional to i. In other words, if f(i) is the frequency of the i'th most common word, then

               i * f(i) = C

In English, it appears that C is approximately 0.1 * N, where N is the size of the corpus in tokens. Empirical experimentation reveals that Zipf's law tends to overestimate the occurrence of the very frequent words. For example, Manning and Schütze show empirical data from Tom Sawyer: with 71,370 word tokens, there are 8,018 word types. This type/token ratio is a bit lower than is usually seen in practice, but they attribute that to it being a children's book. The most common word, the, occurs only 3,332 times rather than the 7,137 predicted by Zipf's law. There are 3,993 words that occur only once, again a bit low. But if you look at the data in the mid ranges of the word frequencies, the predictions are quite close. While the exact details here aren't too significant, the important point is that no matter how large the corpus gets, the number of words that occur only once will continue to increase! Thus a central problem for corpus-based work is developing methods for handling rare events for which we cannot easily estimate reliable probabilities.
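As a rough way to check this on real data, the sketch below counts word frequencies in a plain-text file, ranks the word types, and compares i * f(i) against 0.1 * N at a few ranks. The file name and the whitespace tokenizer are placeholders; as noted above, real corpora only follow the law approximately, especially at the very top and bottom of the ranking.

    from collections import Counter

    # Read a plain-text corpus and split it into lowercase word tokens
    # (a very crude tokenizer; "corpus.txt" is just a placeholder file name).
    with open("corpus.txt", encoding="utf-8") as f:
        tokens = f.read().lower().split()

    N = len(tokens)            # number of word tokens
    freqs = Counter(tokens)    # word type -> frequency
    print("tokens:", N, "types:", len(freqs))

    # Compare i * f(i) with the constant C = 0.1 * N predicted by Zipf's law.
    ranked = freqs.most_common()
    for i in (1, 10, 100, 1000):
        if i <= len(ranked):
            word, f_i = ranked[i - 1]
            print(f"rank {i:5d}  {word!r:>15}  i*f(i) = {i * f_i:8d}  0.1*N = {0.1 * N:10.0f}")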

Training, Test and Development Corpora

How do we know whether a certain technique is effective, or better than another? In general, this is done by testing different algorithms on another corpus, called the test corpus, which is different from the original corpus that we trained the models on, which we call the training corpus. The test corpus looks like the training corpus in that it contains both the data (e.g., the words) and the required linguistic information (the tags), but when we run the algorithm we only look at the words, and then use the supplied answers to see how well it did. It is generally not a good idea to evaluate an algorithm on the same corpus that it was trained on, because it will generally perform much better there than on new, unseen data, and so we get no idea of the generality of the method. For instance, if we trained on a corpus consisting of one sentence, say

The/ART can/N rusted/V

we would get a probability distribution in which the is always an article, can is always a noun, and rusted is always a verb. Using this distribution, we could get a perfect score testing on the sentence The can rusted. But this apparently perfect tagger would score very poorly on a new sentence such as The rusted can melted. The approach would have accuracy problems because the probabilities learned for rusted would predict the wrong tag here, and coverage problems because there would be no tag predicted for the word melted, which does not occur in the training corpus. Thus, to get an informative evaluation, we need a separate test corpus which has not been used in any training phase.
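The following sketch makes the point concrete: a most-frequent-tag tagger trained only on the sentence above scores perfectly on its own training data but fails on the new sentence. (The helper names are invented for this illustration.)

    from collections import Counter, defaultdict

    train = [("the", "ART"), ("can", "N"), ("rusted", "V")]    # The/ART can/N rusted/V

    # Training: record the most frequent tag for each word in the training data.
    counts = defaultdict(Counter)
    for word, tag in train:
        counts[word][tag] += 1
    most_frequent_tag = {w: c.most_common(1)[0][0] for w, c in counts.items()}

    def tag(sentence):
        # Words never seen in training get no prediction at all (a coverage problem).
        return [(w, most_frequent_tag.get(w)) for w in sentence]

    print(tag(["the", "can", "rusted"]))            # perfect on the training sentence
    print(tag(["the", "rusted", "can", "melted"]))  # 'rusted' is wrongly tagged V; 'melted' gets None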

Sometimes it is useful to have a "test" corpus that we use to help refine the probability models after we have trained on the training corpus. This cannot serve as the test corpus, since we are using it for estimating probabilities, but it is not the training corpus either. It is usually called the development corpus.
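In practice the three corpora are often obtained by partitioning one tagged corpus, for example 80% training, 10% development and 10% test, and then never touching the test portion until the final evaluation. A minimal sketch, assuming the corpus is simply a list of sentences:

    import random

    def split_corpus(sentences, train_frac=0.8, dev_frac=0.1, seed=0):
        # Shuffle and partition a corpus into training, development and test sets.
        sentences = list(sentences)
        random.Random(seed).shuffle(sentences)
        n_train = int(len(sentences) * train_frac)
        n_dev = int(len(sentences) * dev_frac)
        return (sentences[:n_train],
                sentences[n_train:n_train + n_dev],
                sentences[n_train + n_dev:])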
 

Evaluation

When we test algorithms on a corpus, we need to compute some measure of how well we are doing. For many tests, we can use a simple measure called accuracy, which is the number of correct answers divided by the number of decisions to be made:

Accuracy = (number of correct answers) / (number of test cases)

For example, in a tagging test corpus that contains 1000 words, if our algorithm got 933 correct, then the accuracy of the algorithm would be .933 or 93.3%.


In some cases, accuracy scores do not provide us with very useful information. For instance, say we were doing a research project on past participle verbs (Penn Treebank tag VBN) and want to use a tagger to find them in a corpus. Let's assume that only 1% of the corpus consists of past participle verb forms. This means that an algorithm that simply classifies every word as NOT a past participle verb would get 99% accuracy without using any training data whatsoever! This sounds like an impressive score, except that this algorithm identifies not a single past participle, and so is useless for what we want. On the other hand, a tagger that simply said every word was a past participle would tag all the past participles correctly, but would only get 1% accuracy. It also would not be very useful.

In situations like this, we need different evaluation measures. Two common measures are precision (the proportion of words identified as past participle verbs that actually are past participles) and recall (the proportion of the actual past participle verbs that were found). More precisely, say we run an algorithm on a corpus to identify past participles. It will identify some correctly, identify some words that are not past participles, and miss some that are. This can be seen in the following table:
 

 

                                  Actually a past participle   Not a past participle
labeled as a past participle      # CORRECTLY IDENTIFIED       # WRONGLY IDENTIFIED
not labeled as a past participle  # MISSED                     # CORRECTLY SKIPPED

Given this,

Precision = (# CORRECTLY IDENTIFIED ) / (# CORRECTLY IDENTIFIED + # WRONGLY IDENTIFIED)

and

Recall = (# CORRECTLY IDENTIFIED ) / (# CORRECTLY IDENTIFIED + # MISSED)

For example, say our 100,000-word corpus contains 100 past participle verbs, and let us compare the accuracy, precision and recall results for four algorithms: one that tags everything as a past participle, one that tags nothing, one that identifies 80 correctly, misses 20 and incorrectly identifies 120 more words as past participles, and one that tags 100 words as past participles of which 70 are correct.
 

Model                   Number correct   Number missed   Number wrongly tagged   Accuracy   Recall   Precision
tag all                 100              0               99,900                  0.1%       100%     0.1%
tag none                0                100             0                       99.9%      0%       0% (or undefined)
80 out of 200 tagged    80               20              120                     99.86%     80%      40%
70 out of 100 tagged    70               30              30                      99.94%     70%      70%

Which is the better model? It depends on what you want. For instance, for a program that automatically identifies and trashes junk mail, you want very high precision so you don't lose good mail, and are willing to live with lower recall (i.e., some junk mails are misidentified as good messages).
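The sketch below recomputes the figures in the table above from the raw counts, assuming a 100,000-word corpus containing 100 past participles; the model names are just labels for the four cases.

    CORPUS_SIZE = 100_000
    TRUE_PARTICIPLES = 100

    def scores(correct, missed, wrong):
        # correct, missed, wrong = correctly identified, missed, wrongly identified.
        correctly_skipped = CORPUS_SIZE - correct - missed - wrong
        accuracy = (correct + correctly_skipped) / CORPUS_SIZE
        recall = correct / (correct + missed) if correct + missed else 0.0
        precision = correct / (correct + wrong) if correct + wrong else float("nan")
        return accuracy, recall, precision

    models = {
        "tag all":              (100, 0, CORPUS_SIZE - TRUE_PARTICIPLES),
        "tag none":             (0, 100, 0),
        "80 out of 200 tagged": (80, 20, 120),
        "70 out of 100 tagged": (70, 30, 30),
    }

    for name, (c, m, w) in models.items():
        acc, rec, prec = scores(c, m, w)
        print(f"{name:22}  accuracy {acc:7.2%}  recall {rec:7.2%}  precision {prec:7.2%}")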

Supervised and Unsupervised Learning

The examples discussed above all involved learning probability distributions over words and other information of interest from training corpora that specified both the input and what the answer should be. Thus, we can learn that the word level occurs most frequently as a verb because that is how it is most frequently labeled. In other situations, we might have the data but not have the answers available. It might seem that this is a hopeless situation, but it turns out that there are techniques for finding regularities in data and essentially creating a labeling scheme that captures these regularities. These are called unsupervised learning techniques because the algorithm must create the models without any help. For example, we might want to build a tagger for some application, but not be sure what set of tags would give us the best performance. We could use an unsupervised learning algorithm to identify a good set of tags for our application. In general, unsupervised learning is more difficult and expensive, and the results will not rival those of supervised learning when we have a good set of features defined and sufficient tagged training data.
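As a toy illustration of the idea (not a method we will rely on later), the sketch below clusters word types by the words that appear next to them, using k-means from scikit-learn; the corpus, the feature design and the number of clusters are all placeholders, and real tag induction would use far larger corpora and better models.

    import numpy as np
    from sklearn.cluster import KMeans   # assumes scikit-learn is installed

    # Toy unlabeled corpus; in practice this would be millions of words.
    tokens = "the can rusted the level board can level the house".split()

    # Represent each word type by counts of its immediate left and right neighbors
    # (a very crude distributional feature).
    vocab = sorted(set(tokens))
    index = {w: i for i, w in enumerate(vocab)}
    features = np.zeros((len(vocab), 2 * len(vocab)))
    for left, right in zip(tokens, tokens[1:]):
        features[index[right], index[left]] += 1               # left-neighbor counts
        features[index[left], len(vocab) + index[right]] += 1  # right-neighbor counts

    # Cluster word types into a small number of induced "tags".
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
    for word, label in zip(vocab, labels):
        print(word, "-> cluster", label)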
 

Readings

Manning and Schütze, Foundations of Statistical Natural Language Processing, Chapters 1 and 4