Unfortunately, discriminative models require a large amount of supervised training data, which may not be available when working with resource-poor languages. The training data for a discriminative reranker usually consists of actual system outputs paired with correct outputs provided by a human annotator. In machine translation, for instance, the reranker would be trained on a parallel corpus: a French sentence would be translated into a number of candidate English sentences, which would then be compared against the gold-standard translation.
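As a rough illustration of what that supervised setup looks like, a reranker is typically trained on pairs of a gold reference and the system's candidate outputs. The sketch below is a minimal perceptron-style trainer with toy bag-of-words features; the function names, features, and data format are hypothetical, not the specific model discussed in the talk.

    # Minimal sketch of perceptron-style reranker training.
    # Each training example pairs a gold reference with the system's
    # candidate outputs; the features here are toy placeholders.
    from collections import defaultdict

    def features(sentence):
        """Toy feature map: bag of words (a real reranker uses richer features)."""
        feats = defaultdict(float)
        for word in sentence.split():
            feats[word] += 1.0
        return feats

    def score(weights, sentence):
        return sum(weights[f] * v for f, v in features(sentence).items())

    def train(examples, epochs=5):
        """examples: list of (gold_sentence, [candidate_sentences])."""
        weights = defaultdict(float)
        for _ in range(epochs):
            for gold, candidates in examples:
                # Candidate the current model prefers.
                best = max(candidates, key=lambda c: score(weights, c))
                if best != gold:
                    # Move weights toward the gold output, away from the mistake.
                    # (Real systems often update toward the candidate closest
                    # to the gold reference rather than the reference itself.)
                    for f, v in features(gold).items():
                        weights[f] += v
                    for f, v in features(best).items():
                        weights[f] -= v
        return weights

The point is only the shape of the data: every training example needs a human-provided reference, which is exactly what is scarce for resource-poor languages.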
Confusion-based language modeling is a semi-supervised method for training the reranker using only a plain monolingual corpus. To do this, we must simulate plausible "confusions": incorrect sentences that the system would be likely to confuse with the correct output. The CLSP held a workshop on this topic last summer; this talk will describe the research that was done there.
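To make the idea concrete, here is one hypothetical way to manufacture confusions from a monolingual corpus by corrupting correct sentences with random substitutions and swaps. This is only an illustration of the general strategy, not the confusion-generation procedure actually developed at the workshop.

    # Hypothetical sketch of generating "confusions" from a monolingual corpus:
    # each correct sentence is a positive example, and noised variants stand in
    # for the incorrect outputs a system might confuse with it.
    import random

    def make_confusions(sentence, vocabulary, n=5, noise_rate=0.2):
        """Return n corrupted variants of a correct sentence."""
        words = sentence.split()
        confusions = []
        for _ in range(n):
            noisy = list(words)
            for i in range(len(noisy)):
                if random.random() < noise_rate:
                    if random.random() < 0.5 and i + 1 < len(noisy):
                        # Swap adjacent words to mimic reordering errors.
                        noisy[i], noisy[i + 1] = noisy[i + 1], noisy[i]
                    else:
                        # Substitute a word drawn from the monolingual vocabulary.
                        noisy[i] = random.choice(vocabulary)
            confusions.append(" ".join(noisy))
        return confusions

Each (correct sentence, confusions) pair can then play the role of (gold, candidates) in a reranker trainer like the one sketched above: the positives come straight from the corpus, the negatives are simulated, and no parallel data or annotator is required.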