Computer Science 248/448
Due Thu Dec 13, 2007 (extended from Wed Nov 28, 2007)
Your assignment is to implement IBM Model 1.
You will train parameters using Expectation
Maximization on a parallel French-English corpus
and evaluate the results on held-out test data
in terms of model perplexity. In particular,
your implementation should handle:
- Insertion from the dummy NULL token
- Many-to-one mappings in one direction
- Calculation of perplexity on training and test data
- Output of parameters in human-readable format
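For concreteness, a minimal sketch of one EM pass is given below, assuming sentence pairs are stored as references to token arrays and the translation table is initialized uniformly over co-occurring word pairs; the names (em_pass, $pairs, $t) are illustrative, not a required interface. Since each French token aligns to exactly one English position (possibly NULL), many-to-one mappings arise in that direction only. Perplexity is taken here as exp of the negative average log-likelihood per French word; running the same likelihood loop with the parameters held fixed gives test-set perplexity.

    use strict;
    use warnings;

    # One EM pass over the corpus (illustrative sketch).
    # $pairs: ref to a list of [\@french_tokens, \@english_tokens];
    # $t:     ref to the translation table, $t->{$e}{$f} = t(f|e).
    sub em_pass {
        my ($pairs, $t) = @_;
        my (%count, %total);               # expected counts c(e,f) and c(e)
        my ($loglike, $nwords) = (0, 0);
        for my $pair (@$pairs) {
            my ($fr, $en) = @$pair;
            my @en = ('NULL', @$en);       # dummy NULL token for insertions
            for my $f (@$fr) {
                # E-step: split one count for $f across all English positions
                my $z = 0;
                $z += ($t->{$_}{$f} // 1e-07) for @en;   # floor unseen pairs
                $loglike += log($z / @en); # per-word term; constant factors omitted
                $nwords++;
                for my $e (@en) {
                    my $delta = ($t->{$e}{$f} // 1e-07) / $z;
                    $count{$e}{$f} += $delta;
                    $total{$e}    += $delta;
                }
            }
        }
        # M-step: renormalize the expected counts
        for my $e (keys %count) {
            for my $f (keys %{ $count{$e} }) {
                $t->{$e}{$f} = $count{$e}{$f} / $total{$e};
            }
        }
        return exp(-$loglike / $nwords);   # per-word perplexity on this data
    }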
Training data can be found here: /u/cs448/data/hw5/.
This directory contains parallel French-English
text from the Canadian Parliament. Both
sides (French and English) have been run through
a tokenizer to split off punctuation from the words.
You should floor all probabilities to a low
number, say 1e-07, to avoid numerical problems as
well as dead-ends in the EM training. Similarly,
you may need to prune low-valued parameters in
order to make memory usage and file sizes manageable.
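For example, a post-M-step cleanup along these lines is one option (the thresholds are arbitrary illustrations, not required values):

    use strict;
    use warnings;

    my %t = ( the => { le => 0.4, la => 3e-08, chien => 2e-12 } );  # toy table

    # prune near-zero parameters, floor the merely small ones
    for my $e (keys %t) {
        for my $f (keys %{ $t{$e} }) {
            if ($t{$e}{$f} < 1e-10) {
                delete $t{$e}{$f};     # prune: keeps memory and output files small
            } elsif ($t{$e}{$f} < 1e-07) {
                $t{$e}{$f} = 1e-07;    # floor: avoids zeros that dead-end EM
            }
        }
    }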
Please turn in:
- your source code
- a graph of perplexity over ten training iterations on training and test data
- some discussion of the translation table learned:
examples of good and bad lexical pairs, what sort
of problems might be "fooling" the algorithm, or
whatever strikes your eye.
This is a big data set, and training
is time- and memory-intensive.
Due in class Fri Oct 26, 5pm (extended from Thu Oct 11)
- Implement the inside-outside algorithm in Perl and apply it to the data in /u/cs448/data/hw4/wsj10.tags.
Plot the model's perplexity on the training data and on the held-out data in /u/cs448/data/hw4/wsj10.test.tags as a function of training iteration, along with the accuracy of unlabeled constituent boundaries. A sketch of the inside pass appears below.
Extra credit: implement variational Bayes.
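As a starting point, here is a minimal sketch of the inside pass for a PCFG in Chomsky normal form; the rule-table layout and the start symbol TOP are assumptions made for illustration, and the outside pass plus the expected-count accumulation reuse the same span loops.

    use strict;
    use warnings;

    # Inside pass for a CNF PCFG (sketch).
    # $rules->{binary}{$A}{$B}{$C} = P(A -> B C)
    # $rules->{unary}{$A}{$w}      = P(A -> w), $w a POS tag from wsj10.tags
    sub inside {
        my ($words, $rules) = @_;
        my $n = @$words;
        my %beta;                          # $beta{$i}{$j}{$A} = P(A =>* w_i..w_j)
        for my $i (0 .. $n - 1) {          # width-1 spans: lexical rules
            for my $A (keys %{ $rules->{unary} }) {
                $beta{$i}{$i}{$A} = $rules->{unary}{$A}{ $words->[$i] }
                    if exists $rules->{unary}{$A}{ $words->[$i] };
            }
        }
        for my $width (2 .. $n) {          # wider spans, bottom-up
            for my $i (0 .. $n - $width) {
                my $j = $i + $width - 1;
                for my $k ($i .. $j - 1) { # split between the two children
                    for my $A (keys %{ $rules->{binary} }) {
                        for my $B (keys %{ $rules->{binary}{$A} }) {
                            next unless exists $beta{$i}{$k}{$B};
                            for my $C (keys %{ $rules->{binary}{$A}{$B} }) {
                                next unless exists $beta{ $k + 1 }{$j}{$C};
                                $beta{$i}{$j}{$A} +=
                                    $rules->{binary}{$A}{$B}{$C}
                                  * $beta{$i}{$k}{$B}
                                  * $beta{ $k + 1 }{$j}{$C};
                            }
                        }
                    }
                }
            }
        }
        return \%beta;   # sentence probability: $beta->{0}{ $n - 1 }{'TOP'}
    }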
Due Fri Sep 21, 5pm
- Gold and Morgan ex 6.3, 6.4, 20.1, 20.2
Due Fri Sep 7, 5pm
- Write a Perl program to count the number of occurrences of each context-free grammar rule
(parent and child labels for each node in the tree)
in the trees in /u/cs448/data/hw2/wsj. Remove indexes (such as -1) and function tags
(such as -TMP) first. Plot rank vs. frequency and fit the curve as you did for the
previous assignment. How do the curves for words and grammar rules compare?
Why do you think they are the same or different? A sketch of the rule extraction follows below.
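One way to extract the rules is a small stack-based reader over the bracketed trees. The sketch below assumes one tree per line and counts productions over node labels only (words are skipped); adjust for the actual layout of the files.

    use strict;
    use warnings;

    # Count rules (parent -> child labels) in bracketed trees read from stdin.
    my %rule;
    while (my $line = <>) {
        my @tok = $line =~ /\(|\)|[^\s()]+/g;   # parens and symbols
        my @stack;                              # entries: [label, \@child_labels]
        while (@tok) {
            my $t = shift @tok;
            if ($t eq '(') {
                # a label follows, unless this is the unlabeled outer bracket
                my $label = (@tok && $tok[0] !~ /^[()]$/) ? shift @tok : 'TOP';
                $label =~ s/[-=].+$// unless $label =~ /^-/;  # strip -1, -TMP, =2
                push @stack, [$label, []];
            } elsif ($t eq ')') {
                my ($label, $kids) = @{ pop @stack };
                $rule{"$label -> @$kids"}++ if @$kids;  # one rule per internal node
                push @{ $stack[-1][1] }, $label if @stack;
            }
            # bare tokens are words and are skipped: rules here are over labels
        }
    }
    for my $r (sort { $rule{$b} <=> $rule{$a} } keys %rule) {  # by frequency
        print "$rule{$r}\t$r\n";
    }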
- Implement the deleted interpolation language model described in class,
and test it on the hansard and wsj data from the previous assignment. Make
your own division into training and held-out data, and compare perplexities
within and across data sets.
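The sketch below shows one standard formulation of the weight estimation: the interpolated trigram model is treated as a three-component mixture whose weights are fit by EM on held-out trigrams. A single global set of lambdas is assumed here; the version described in class may instead bucket them by history count, so adapt as needed.

    use strict;
    use warnings;

    # Interpolated trigram model:
    #   P(w3|w1,w2) = l3*f(w3|w1,w2) + l2*f(w3|w2) + l1*f(w3)
    # Training counts (names illustrative): $c->{uni}{$w}, $c->{bi}{"$w1 $w2"},
    # $c->{tri}{"$w1 $w2 $w3"}, and $c->{N}, the training token count.

    sub component_probs {
        my ($w1, $w2, $w3, $c) = @_;
        my $p1 = ($c->{uni}{$w3} // 0) / $c->{N};
        my $p2 = $c->{uni}{$w2}
               ? ($c->{bi}{"$w2 $w3"} // 0) / $c->{uni}{$w2} : 0;
        my $p3 = $c->{bi}{"$w1 $w2"}
               ? ($c->{tri}{"$w1 $w2 $w3"} // 0) / $c->{bi}{"$w1 $w2"} : 0;
        return ($p1, $p2, $p3);
    }

    # Re-estimate the mixture weights on held-out trigrams by EM.
    sub reestimate_lambdas {
        my ($heldout, $c) = @_;        # $heldout: ref to a list of [w1, w2, w3]
        my @lam = (1/3, 1/3, 1/3);
        for my $iter (1 .. 20) {
            my @post = (0, 0, 0);
            for my $tri (@$heldout) {
                my @p = component_probs(@$tri, $c);
                my $mix = $lam[0]*$p[0] + $lam[1]*$p[1] + $lam[2]*$p[2];
                next if $mix <= 0;
                # E-step: posterior responsibility of each component
                $post[$_] += $lam[$_] * $p[$_] / $mix for 0 .. 2;
            }
            my $z = $post[0] + $post[1] + $post[2];
            @lam = map { $_ / $z } @post;   # M-step: renormalize
        }
        return @lam;                   # (l1, l2, l3)
    }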
- Write a Perl program to count the number of occurrences of each word in the three files.
Your program should be able to output the counts sorted by frequency and alphabetically.
- Plot rank vs. frequency for each corpus. How closely do they follow Zipf's law?
- Calculate and plot frequencies for all bigrams in each file.
- What is similar and different across the corpora, both in terms of the curves and
the frequent words/bigrams themselves?
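A counting script in this style might look like the following; it treats the input as one token stream per file and prints one count/word pair per line, which is easy to feed to a plotting tool for the rank-frequency curves.

    use strict;
    use warnings;

    # Count word and bigram frequencies in the files named on the command line.
    my (%word, %bigram);
    my $prev;
    while (my $line = <>) {
        for my $w (split ' ', $line) {
            $word{$w}++;
            $bigram{"$prev $w"}++ if defined $prev;
            $prev = $w;
        }
        undef $prev if eof;   # don't form bigrams across file boundaries
    }
    # sorted by descending frequency, alphabetically as the tie-break
    for my $w (sort { $word{$b} <=> $word{$a} || $a cmp $b } keys %word) {
        print "$word{$w}\t$w\n";
    }

The bigram table can be dumped with the same sort applied to %bigram.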
gildea @ cs rochester edu
December 6, 2007