Current spoken dialog systems operate within narrow domains, and many work by filling slots with the information they need to achieve a specific task. Such simple systems do not require a syntactic analysis of what the user said; they can accomplish their task by recognizing only a few key phrases. We believe that as spoken language systems become more sophisticated, they will require a more thorough analysis of the user input, rendering many current robustness strategies ineffective. Once implausible speech recognition hypotheses are identified, the spoken dialog system can attempt to repair the communication breakdown, either by using stochastic methods to predict what was actually said or by adapting the dialog strategy appropriately.
We show that by describing the expected structure of spoken turns in human-computer practical dialog and formalizing that structure as context-free grammar rules for a bottom-up chart parser, we can achieve 92.1% accuracy in detecting erroneous speech recognizer output based solely on the chart generated during parsing, an improvement of 18.2% over the majority-class baseline. Furthermore, we can reliably locate the start index of errors within misrecognized strings using the chart and domain-specific word bigram models. We developed and implemented algorithms that use the predicted error start location, the word bigram models, phonetic similarity, and the recognized string to generate correction hypotheses.
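To make the chart-based detection concrete, the following minimal sketch flags a recognizer hypothesis as misrecognized when no completed constituent in the chart covers enough of the turn. It is an illustration only: the toy CNF grammar, the CKY-style chart construction, the vocabulary, and the coverage threshold are invented stand-ins, not the paper's dialog grammar, parser, or chart features.

```python
# Sketch: flag a hypothesis as misrecognized when the bottom-up chart
# lacks a wide completed constituent. Grammar and threshold are toys.
from collections import defaultdict

# Hypothetical CNF rules: binary rules and lexical (terminal) rules.
BINARY = {("REQ", "SLOT"): "TURN", ("DET", "NOUN"): "SLOT"}
LEXICAL = {"book": "REQ", "a": "DET", "flight": "NOUN", "hotel": "NOUN"}

def chart_spans(tokens):
    """CKY-style bottom-up chart: all completed (start, end, label) spans."""
    n = len(tokens)
    cell = defaultdict(set)  # (i, j) -> nonterminal labels spanning [i, j)
    for i, w in enumerate(tokens):
        if w in LEXICAL:
            cell[(i, i + 1)].add(LEXICAL[w])
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for a in cell[(i, k)]:
                    for b in cell[(k, j)]:
                        if (a, b) in BINARY:
                            cell[(i, j)].add(BINARY[(a, b)])
    return [(i, j, lab) for (i, j), labs in cell.items() for lab in labs]

def looks_misrecognized(tokens, threshold=0.75):
    # Feature: fraction of the turn covered by the widest completed
    # constituent, a stand-in for the paper's richer chart features.
    widest = max((j - i for i, j, _ in chart_spans(tokens)), default=0)
    return widest < threshold * len(tokens)

print(looks_misrecognized("book a flight".split()))  # False: full parse
print(looks_misrecognized("book a fright".split()))  # True: chart has gaps
```

The intuition is the one stated above: an in-grammar turn yields a constituent spanning (nearly) the whole input, while a misrecognized turn leaves holes in the chart that no rule can bridge.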
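In the same spirit, the sketch below illustrates bigram-based error-start localization and correction generation. The bigram table and thresholds are hand-specified and hypothetical, and character-level string similarity is used as a crude stand-in for the paper's phonetic similarity measure.

```python
# Sketch: locate the first implausible word under a domain bigram model,
# then propose in-vocabulary replacements that resemble the suspect word
# and fit the bigram context. All probabilities here are made up.
from difflib import SequenceMatcher

BIGRAM = {  # P(w2 | w1), hypothetical domain estimates
    ("<s>", "book"): 0.4, ("book", "a"): 0.6,
    ("a", "flight"): 0.5, ("a", "hotel"): 0.3,
}
VOCAB = {"book", "a", "flight", "hotel"}

def error_start(tokens, floor=0.05):
    """Index of the first word whose bigram probability falls below floor."""
    prev = "<s>"
    for i, w in enumerate(tokens):
        if BIGRAM.get((prev, w), 0.0) < floor:
            return i
        prev = w
    return None

def corrections(tokens, start, floor=0.05, sim=0.5):
    """Rank replacements for the suspect word by bigram fit and surface
    similarity (string-based here, not phonetic, purely for illustration)."""
    prev = tokens[start - 1] if start > 0 else "<s>"
    bad = tokens[start]
    cands = []
    for w in VOCAB:
        p = BIGRAM.get((prev, w), 0.0)
        s = SequenceMatcher(None, bad, w).ratio()
        if p >= floor and s >= sim:
            cands.append((p * s, tokens[:start] + [w] + tokens[start + 1:]))
    return [hyp for _, hyp in sorted(cands, reverse=True)]

hyp = "book a fright".split()
i = error_start(hyp)        # -> 2: "fright" is implausible after "a"
print(corrections(hyp, i))  # -> [['book', 'a', 'flight']]
```

On the toy hypothesis, the bigram model pinpoints the out-of-domain word and the similarity filter recovers the intended string; the paper's algorithms combine the same ingredients (error start location, bigram models, phonetic similarity, and the recognized string) over real recognizer output.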