Computer Science 248/448
Due Fri Sep 4, 5pm
Due Fri Sep 18, 5pm
- Write a Perl program to count the number of occurrences of each word in the three files.
Your program should be able to output the counts sorted by frequency and alphabetically
(a counting sketch in Perl follows this list).
- Plot rank vs. frequency for each corpus. How closely do they follow Zipf's law?
- Calculate and plot frequencies for all bigrams in each file.
- What is similar and different across the corpora, both in terms of the curves and
the frequent words/bigrams themselves?
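A minimal counting sketch in Perl, assuming whitespace-tokenized input files named on the command line (adapt the tokenization to the actual corpora):

    #!/usr/bin/perl
    # Count word frequencies in the file(s) named on the command line.
    use strict;
    use warnings;

    my %count;
    while (my $line = <>) {
        chomp $line;
        # assumes whitespace tokenization; adjust to match the corpus format
        for my $w (split /\s+/, lc $line) {
            next if $w eq '';
            $count{$w}++;
        }
    }

    # alphabetical listing:
    #   for my $w (sort keys %count) { print "$w\t$count{$w}\n"; }

    # listing by descending frequency (rank order, handy for the Zipf plot):
    my $rank = 0;
    for my $w (sort { $count{$b} <=> $count{$a} } keys %count) {
        $rank++;
        print "$rank\t$count{$w}\t$w\n";
    }

Bigram counts work the same way, with each pair of adjacent tokens joined into a single hash key.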
Due Wed Oct 14, 5pm
- Implement Viterbi and posterior decoding for a POS HMM using the
data in /u/cs448/data/hw2 for training and testing. Which does better
in per-word and per-sentence accuracy? (A decoding sketch follows this list.)
- Experiment with Dirichlet smoothing parameters - do they improve
results, and what is a good value for alpha?
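A minimal decoding sketch in Perl. All names here (%trans, %emit, and so on) are placeholders for your own data structures; the helper at the top shows where the Dirichlet/add-alpha parameter enters, and the decoder assumes the smoothed log probabilities have already been tabulated.

    use strict;
    use warnings;

    # Add-alpha (Dirichlet) smoothed emission log probability:
    #   P(w|t) = (count(t,w) + alpha) / (count(t) + alpha * V)
    sub emit_logp {
        my ($emit_count, $tag_count, $t, $w, $alpha, $vocab) = @_;
        my $c = $emit_count->{$t}{$w} // 0;
        return log(($c + $alpha) / ($tag_count->{$t} + $alpha * $vocab));
    }

    # Viterbi decoding in log space.
    # $words: sentence (array ref); $tags: tag set (array ref);
    # $trans->{prev}{cur}: transition log probs, with '<s>' as the start state;
    # $emit->{tag}{word}: emission log probs; $unk: log prob for unseen words.
    sub viterbi {
        my ($words, $tags, $trans, $emit, $unk) = @_;
        my (@delta, @back);
        for my $t (@$tags) {
            $delta[0]{$t} = ($trans->{'<s>'}{$t} // -1e9)
                          + ($emit->{$t}{$words->[0]} // $unk);
        }
        for my $i (1 .. $#$words) {
            for my $t (@$tags) {
                my ($best, $arg) = (-1e18, undef);
                for my $p (@$tags) {
                    my $s = $delta[$i-1]{$p} + ($trans->{$p}{$t} // -1e9);
                    ($best, $arg) = ($s, $p) if $s > $best;
                }
                $delta[$i]{$t} = $best + ($emit->{$t}{$words->[$i]} // $unk);
                $back[$i]{$t}  = $arg;
            }
        }
        # pick the best final tag and trace back
        my ($best, $last) = (-1e18, undef);
        for my $t (@$tags) {
            ($best, $last) = ($delta[-1]{$t}, $t) if $delta[-1]{$t} > $best;
        }
        my @path = ($last);
        unshift @path, $back[$_]{$path[0]} for reverse 1 .. $#$words;
        return \@path;
    }

Posterior decoding replaces the max in the recursion with forward-backward sums and picks, at each position, the tag with the highest posterior probability.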
Due Wed Nov 18, 5pm
- Train a trigram language model with Good-Turing Katz backoff using
the SRILM ngram-count
command installed under /p/nl/tools/srilm.
- Write code to read the language model file (see the file format documentation)
and compute the perplexity of new sentences. This involves applying backoff
weights for unseen trigrams. Report the perplexity of the test data from
homework 2 using a language model trained on the training data from homework 2.
(A reading-and-perplexity sketch follows this list.)
- Implement lattice rescoring - lattice to be supplied soon...
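A minimal sketch in Perl for the reading-and-perplexity step, assuming the model file is in the standard ARPA text format that SRILM writes (the format described in the documentation); the script name and usage line are just placeholders.

    #!/usr/bin/perl
    # Read an ARPA-format model (e.g. the output of ngram-count) and compute
    # test-set perplexity with Katz backoff.  Usage: perl ppl.pl model.lm < test.txt
    use strict;
    use warnings;

    my (%logp, %bow);          # log10 probability and backoff weight per n-gram
    my $order = 0;
    open my $lm, '<', $ARGV[0] or die "cannot open $ARGV[0]: $!";
    while (<$lm>) {
        chomp;
        last if /^\\end\\/;
        if (/^\\(\d)-grams:/) { $order = $1; next; }
        next unless $order && /\S/;
        my @f = split /\t/;    # logprob <tab> n-gram [<tab> backoff weight]
        next if @f < 2;
        $logp{$f[1]} = $f[0];
        $bow{$f[1]}  = $f[2] if defined $f[2];
    }
    close $lm;

    # Katz backoff: use the longest matching n-gram; otherwise add the backoff
    # weight of the truncated history and recurse on the shorter n-gram.
    sub ngram_logp {
        my @w = @_;
        my $key = join ' ', @w;
        return $logp{$key} if exists $logp{$key};
        return -99 if @w == 1;           # unseen word; map to <unk> if the LM has one
        my $hist = join ' ', @w[0 .. $#w - 1];
        return ($bow{$hist} // 0) + ngram_logp(@w[1 .. $#w]);
    }

    # per-word perplexity of the test sentences on stdin (SRILM logs are base 10)
    my ($total, $n) = (0, 0);
    while (my $line = <STDIN>) {
        my @w = grep { $_ ne '' } ('<s>', '<s>', split(/\s+/, $line), '</s>');
        for my $i (2 .. $#w) {
            $total += ngram_logp(@w[$i - 2 .. $i]);
            $n++;
        }
    }
    printf "perplexity = %.2f\n", 10 ** (-$total / $n);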
Your assignment is to implement IBM Model 1.
You will train the parameters using Expectation Maximization on
a parallel French-English corpus, and evaluate
the results on held-out test data in terms of
model perplexity. In particular, your implementation
should handle:
- Insertion from the dummy NULL token
- Many-to-one mappings in one direction
- Calculation of perplexity on training and test data
- Output of parameters in human-readable format
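As a starting point, here is a minimal sketch of one EM iteration, with placeholder names throughout; $t->{$f}{$e} holds t(f|e), and @$pairs holds [ \@french_tokens, \@english_tokens ] pairs with 'NULL' already prepended to every English sentence. The 1e-7 floor anticipates the note on flooring below.

    use strict;
    use warnings;

    sub em_iteration {
        my ($pairs, $t) = @_;
        my (%count, %total);             # expected counts c(e,f) and c(e)
        my $loglik = 0;
        for my $pair (@$pairs) {
            my ($f_sent, $e_sent) = @$pair;
            for my $f (@$f_sent) {
                # E-step: split this French word's count over the English words
                # in proportion to the current t(f|e)
                my $norm = 0;
                $norm += ($t->{$f}{$_} // 1e-7) for @$e_sent;
                $loglik += log($norm) - log(scalar @$e_sent);
                for my $e (@$e_sent) {
                    my $frac = ($t->{$f}{$e} // 1e-7) / $norm;
                    $count{$e}{$f} += $frac;
                    $total{$e}     += $frac;
                }
            }
        }
        # M-step: renormalize expected counts into the new t(f|e), with a floor
        my %new_t;
        for my $e (keys %count) {
            for my $f (keys %{ $count{$e} }) {
                my $p = $count{$e}{$f} / $total{$e};
                $new_t{$f}{$e} = $p > 1e-7 ? $p : 1e-7;
            }
        }
        return (\%new_t, $loglik);
    }

The accumulated log-likelihood divided by the number of French tokens gives a per-word perplexity; running the same computation over the held-out pairs, without updating the counts, gives the test-set number.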
Training data can be found here: /u/cs448/data/hw4/.
This directory contains parallel French-English
text from the Canadian Parliament. Both
sides (French and English) have been run through
a tokenizer to split off punctuation from the words.
You should floor all probabilities to a low
number, say 1e-07, to avoid numerical problems as
well as dead-ends in the EM training. Similarly,
you may need to prune low-valued parameters in
order to make memory usage and file sizes manageable.
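One possible way to handle the flooring, the pruning, and the human-readable dump, again with placeholder names and an assumed pruning threshold:

    # floor t(f|e) whenever it is looked up inside EM, to avoid zeros/dead-ends
    sub floored_t {
        my ($t, $f, $e) = @_;
        my $p = $t->{$f}{$e} // 0;
        return $p < 1e-7 ? 1e-7 : $p;
    }

    # write the table as "french english t(f|e)" lines, dropping tiny entries
    sub dump_table {
        my ($t, $file, $prune) = @_;     # e.g. $prune = 1e-4 (placeholder value)
        open my $out, '>', $file or die "cannot open $file: $!";
        for my $f (sort keys %$t) {
            for my $e (sort { $t->{$f}{$b} <=> $t->{$f}{$a} } keys %{ $t->{$f} }) {
                next if $t->{$f}{$e} < $prune;       # prune low-valued parameters
                printf $out "%s\t%s\t%.6g\n", $f, $e, $t->{$f}{$e};
            }
        }
        close $out;
    }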
Please turn in:
- your source code
- a graph of perplexity over ten training iterations on training and test data
- some discussion of the translation table learned:
examples of good and bad lexical pairs, what sort
of problems might be "fooling" the algorithm, or
whatever strikes your eye.
This is a big data set, and training
is time- and memory-intensive.
gildea @ cs rochester edu
November 10, 2009