Ph.D. Thesis Proposal
After a decade of research, syntax-based statistical machine translation is beginning to achieve performance on par with the best phrase-based systems. Somewhat surprisingly, these gains have come at both ends of the linguistic spectrum: one class of approaches makes extensive use of linguistic annotation, while the other uses only a single generic nonterminal in a synchronous grammar framework. Common to all approaches along this spectrum, however, is a research focus on syntax-based *translation* models, coupled with continued reliance on n-grams in decoding algorithms. Apart from a few half-hearted attempts to use parsers as language models, there has been no research into the use of syntax in language models for machine translation.
N-grams are widely acknowledged to be poor models of language: by construction, they cannot capture any dependency spanning more than n-1 words. In this thesis proposal, we present the case against them and suggest that parsers will also make poor language models for machine translation. We propose the investigation and development of new syntax-based language models and outline a plan for carrying out this work over the next few years.
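
To make the n-gram limitation concrete, the following minimal sketch (our own illustration, not drawn from the proposal; the sentences and function name are hypothetical) shows that a trigram model conditions each word on only the two preceding words, so a subject several words to the left cannot influence the model's preference for verb agreement:

```python
def trigram_context(sentence, i):
    """Return the context a trigram model actually sees for word i:
    only the two immediately preceding words."""
    words = sentence.split()
    return tuple(words[max(0, i - 2):i])

# Grammatical sentence and a hypothetical agreement error:
good = "the senators who the reporter criticized are unhappy"
bad = "the senator who the reporter criticized are unhappy"

# Both sentences present the identical trigram context for "are" (index 6):
print(trigram_context(good, 6))  # ('reporter', 'criticized')
print(trigram_context(bad, 6))   # ('reporter', 'criticized')

# Since P(are | reporter, criticized) is the same in both cases, the model
# cannot prefer the grammatical sentence; a model with access to syntactic
# structure could instead condition on the head noun "senators"/"senator".
```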