In this dissertation, we use a mixture of methods from Computational Linguistics and Psycholinguistics to: 1) create richly annotated corpora ideally suited for exploring incremental interpretation and production; 2) develop a conversational dialog architecture that aims to interpret language incrementally; and 3) understand how speakers translate a message into language, particularly across clauses.
Most computational models developed in Natural Language Processing are essentially off-line models: parsing proceeds sentence by sentence, so interpretation only becomes available after an entire sentence has been processed. These models are incompatible with psycholinguistic findings that human comprehension is incremental, with listeners committing to partial interpretations word by word. We present a proof-of-concept method that incorporates semantic expectations into parsing as an utterance unfolds.
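To make the contrast concrete, the sketch below shows the general shape of such a method in Python. It is a toy illustration only, not the dissertation's architecture: the lexicon, semantic classes, and function names are invented for this example. The key property is that a partial interpretation is available after every word, rather than only at the sentence boundary.

```python
# Toy lexicon of semantic expectations raised by verbs, and semantic
# classes for nouns (both invented for illustration).
EXPECTATIONS = {
    "pour": {"theme": "liquid"},
    "drink": {"theme": "liquid"},
    "eat": {"theme": "food"},
}

SEMANTIC_CLASS = {
    "water": "liquid",
    "coffee": "liquid",
    "bread": "food",
}

def parse_incrementally(utterance):
    """Process an utterance word by word, yielding a partial
    interpretation at each step instead of waiting for the end."""
    expectations = {}   # thematic roles predicted by material seen so far
    interpretation = []
    for word in utterance.split():
        if word in EXPECTATIONS:
            # A verb raises expectations about its upcoming arguments.
            expectations.update(EXPECTATIONS[word])
            interpretation.append((word, "predicate"))
        elif word in SEMANTIC_CLASS:
            # A noun is matched against the current semantic expectations.
            role = next((r for r, cls in expectations.items()
                         if cls == SEMANTIC_CLASS[word]), None)
            interpretation.append((word, role or "unexpected"))
        else:
            interpretation.append((word, None))
        # Interpretation is available before the utterance is complete.
        yield list(interpretation)

for partial in parse_incrementally("pour the water"):
    print(partial)
```

Running this prints a growing interpretation after each word, so "water" is already linked to the theme role predicted by "pour" the moment it is heard, mirroring the incremental behavior the off-line models lack.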
Language production research beyond the clausal level has so far received relatively little attention, due in part to the complexities involved in approaching the question experimentally. We test current language production theories against our corpora. We find that speakers choose syntactic structures that optimize computational resources, consistent with a resource-limitation account. Yet this account conflicts with evidence that language production proceeds incrementally. We extend the account using information density, resulting in a more psychologically realistic model of language production.
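As background, information density is standardly quantified as surprisal; the formulation below is the usual Shannon definition, given here for reference rather than as the dissertation's exact measure:

\[
I(w_i) = -\log_2 P(w_i \mid w_1, \ldots, w_{i-1})
\]

On this view, a speaker choosing among syntactic variants prefers the one that spreads I(w_i) more evenly across the utterance, avoiding local spikes that would strain processing resources, which is compatible with incremental production in a way that a pure resource-limitation account is not.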