The Monroe Corpus



Dialogue segmentation, transcription and alignment

First, each dialogue was recorded from the DAT machine at 44.1 kHz using the command narecord -s 44100 <filename>.raw. The resulting audio file had no header, so it was converted to a file with an ESPS header using btosps -f 44100 -n 2 <filename>.raw <filename>.fea, and then any blank audio at the beginning and end of the file was trimmed. To enable us to use many of the ESPS tools (in particular, the aligner), we downsampled the audio to 16 kHz using sfconvert -s16000 <filename>.fea dialog.fea. Finally, the resulting 2-channel file was separated into two single-channel files using demux -e0,1 dialog.fea speaker0.fea speaker1.fea so that we could do the segmentation.
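This pipeline can be replayed per dialogue as a small shell script. The sketch below simply strings together the commands already given in this section; the dialogue name d1 is a placeholder, and the trimming of blank audio is indicated only as a comment because no specific command for it is given here.

#!/bin/sh
# Audio preparation for one dialogue, following the steps described above.
# "d1" is a placeholder dialogue name.
d=d1

# Record the two-channel DAT output at 44.1 kHz (headerless raw audio).
narecord -s 44100 $d.raw

# Add an ESPS header (44.1 kHz sampling rate, 2 channels).
btosps -f 44100 -n 2 $d.raw $d.fea

# (Trim any blank audio at the beginning and end of $d.fea here.)

# Downsample to 16 kHz so that the ESPS tools (in particular the aligner)
# can be used.
sfconvert -s16000 $d.fea dialog.fea

# Split the two-channel file into one file per speaker for segmentation.
demux -e0,1 dialog.fea speaker0.fea speaker1.fea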

We used showbreakup to manually segment the audio into utterances. Each dialogue was segmented by one person using the guidelines in [Heeman and Allen, 1995]. The segmentation breaks the audio data up not only into utterances, but also into turns separated by speaker changes. Not every speaker change results in a new turn; for instance, back-channels do not cause a turn break.

Following the segmentation, breakup was used to break the audio file into individual utterance files.

We segmented the dialogues into utterances for the following reasons:

We have found that no segmentation is perfect for every type of processing or dialogue annotation. It is frequently necessary to re-segment to some extent for different analyses. Furthermore, a lot of information is left out if one only looks at the utterances of a segmented dialogue; pauses, overlap, breaths, and non-verbal sounds may all be missing or slightly incorrect in the transcription. Listening to the entire dialogue at least once can therefore be very helpful for annotators.

Each dialogue was transcribed by hand from the audio. We used the script trtoutts to break up the text transcription into individual utterance text files matching the audio segmentation. Then we ran the ESPS aligner on each utterance to align the transcription with the audio; this produced a .words file and a .phones file for each utterance. The segmentation and alignments were later checked and corrected by another person.

After the alignment process had finished, we used dump_utts -e words '*' speaker0 speaker1 to create a single .words file for each speaker. Then we used transcriptor to produce, from these .words files, a text transcript that includes silences, and format.pl and raw2sgml to produce the .sgml transcript file. Other scripts were used to format the transcript for other types of annotation.
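For reference, the sketch below restates the transcript-generation steps. Only the dump_utts call is shown as a command; the invocations of transcriptor, format.pl and raw2sgml are not documented on this page, so they appear only as comments.

#!/bin/sh
# Collect the per-utterance .words files into one .words file per speaker.
dump_utts -e words '*' speaker0 speaker1

# Remaining steps, run on the per-speaker .words files (exact invocations
# are not documented here):
#   transcriptor         - text transcript including silences
#   format.pl, raw2sgml  - .sgml transcript file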

The narecord program is part of the DAT machine software. The tools we used, including btosps, sfconvert, demux, and the aligner, are from the ESPS/waves+ 5.3 package. Some of the scripts we used were derived from those explained in detail in [Heeman and Allen, 1994], but were not identical; we had to update them. Other scripts were newly written.

We have not annotated or modified the video data in any way.
