Building Speech Recognition Models for TRIPS

Contents:

1. Obtaining the training data
1.1 Collecting initial data
1.2 Building an 'artificial' grammar
1.3 Generating an 'artificial' corpus
1.4 Using class definitions to generate 'artificial' corpus
2. Building language models
2.1 Configuration
2.1.1 Data files and directories
2.1.2 The configuration file
2.2 The makefile
2.3 lm-ez*
2.3.1 Options
2.3.2 Examples
2.4 Creating native Sphinx class based models
2.4.1 Class definition file
2.4.2 Generating 'artificial' corpus and inclass probability model
3. Configuring Sphinx
3.1 lm.ctl
4. Related information
5. Notes

 

1. Obtaining the training data

1.1 Collecting initial data

1.2 Building an 'artificial' grammar

1.3 Generating an 'artificial' corpus

1.4 Using class definitions to generate 'artificial' corpus

gramdump takes an argument specifying a generic class definition file, as shown below. This is necessary for generating native Sphinx class-based models (see section 2.4).

gramdump -classes my_classes -g my_grammar

The format of the class definition file is as follows:

<lm_classes>
<class name="CLASSNAME">
<member>MEMBER1</member>
<member>MEMBER2</member>
...
<subclass>CLASSNAME1</subclass>
<subclass>CLASSNAME2</subclass>
...
</class>
<class name="CLASSNAME1">
...
</class>
</lm_classes>

where the above is equivalent to the artificial corpus definition:

<CLASSNAME> ::= MEMBER1 | MEMBER2 | <CLASSNAME1> | <CLASSNAME2>
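
For instance, using the drug hierarchy from the medadvisor examples in section 2.4.2 (reduced here to a single member for brevity; a real class file would contain many more members and classes), a class definition file could contain:

<lm_classes>
<class name="SUBSTANCE">
<subclass>DRUG</subclass>
</class>
<class name="DRUG">
<subclass>DRUG_NON_PRESCRIPTION</subclass>
</class>
<class name="DRUG_NON_PRESCRIPTION">
<member>ASPIRIN</member>
</class>
</lm_classes>

which corresponds to the artificial corpus definitions:

<SUBSTANCE> ::= <DRUG>
<DRUG> ::= <DRUG_NON_PRESCRIPTION>
<DRUG_NON_PRESCRIPTION> ::= ASPIRIN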

 

2. Building language models

2.1 Configuration

2.1.1 Data files and directories
Most directories are created automatically by the tools used for building the models; the destination folders can be specified on the command line (see the Tools Manual for details). The data files are generally found in /u/trains/lm/domains. This folder contains a subfolder for every domain, plus a subfolder called general that contains files that are TRIPS-dependent rather than domain-dependent.
The data used for phonetic and language modelling comprise corpora, phonetic dictionaries and, optionally, tag dictionaries, lists of compound words, lists of words and/or tags used for class-based adaptation (see below), grammars for building artificial corpora, etc. The following section makes clear what kind of data is needed and where it should be stored. Typical sub-directories found in the domain folders are:
2.1.2 The configuration file
The configuration file provides a convenient way of defining the parameters and the environment for building new SR models. It includes definitions for several variables that specify the type of models being built and the data being used. In practice, to build SR models it is generally enough to write a good configuration file; everything else can be done automatically. In principle, other variables that appear in the makefile (see the next section) can also be set in the configuration file; however, doing so is not recommended.
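
As an illustration only, a configuration file might look something like the sketch below. The variable names shown here (DOMAIN, DATADIR, CORPUS, DICTIONARY) and the sub-directory names are hypothetical placeholders; the actual variables recognized by the makefile should be taken from the makefile itself.

# Hypothetical configuration for the monroe domain.
# Variable and directory names are illustrative placeholders only.
DOMAIN = monroe
DATADIR = /u/trains/lm/domains/monroe
CORPUS = $(DATADIR)/corpora/monroe.txt
DICTIONARY = $(DATADIR)/dictionaries/monroe.dic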

2.2 The makefile

The process of building language and pronunciation models for TRIPS is driven by one fairly large makefile, which also includes support for perplexity evaluation. For full control over the process, one should use the gmake utility directly. Alternatively, the lm-ez* script can be used, which is more transparent and provides some extra functionality.
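
For reference, a direct invocation might look something like the following sketch; the CONFIG variable shown here is a hypothetical placeholder, and the actual targets and variables accepted by the makefile are defined in the makefile itself.

example% gmake -f Makefile CONFIG=/u/trains/lm/domains/monroe/config/monroe.config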

2.3 lm-ez*

lm-ez* is a Perl script that is in fact a wrapper for the makefile, but with the advantage of a simpler command line syntax. In addition, it checks the results of the individual steps in building the models, and prompts the user with detailed information when certain decisions need to be made, for example how to deal with incomplete tag dictionaries or incomplete pronunciation dictionaries. The user can either correct things manually or let the script take care of the problems it encounters. The script can also build several models at once, and prompts the user to decide whether to build interpolated models as well.
One particularly useful feature is that it can build models from a new text and combine the resulting models with some given base models; this allows the tool to be used for static adaptation. The models are combined by linear interpolation, and the user will therefore be prompted to provide interpolation weights. Based on past experience, we found certain sets of weights to produce good results.
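For reference, linear interpolation combines the base model and the model built from the new text as a weighted sum of their probabilities; this is the standard formulation, stated here only to make the role of the weights explicit:

P(w | h) = lambda * P_base(w | h) + (1 - lambda) * P_new(w | h)

where P_base and P_new are the probabilities assigned to word w in context h by the base model and the new model, respectively, and the weight lambda lies between 0 and 1.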
2.3.1 Options
-m model Model name (required).
-d lmdir Destination directory. If not set, it defaults to model.
-t text Use text as the training corpus.
-c config_file Configuration file.
-b basemodel basedic Base language and pronunciation models.
-l logdir Log directory.
-w Build word-based models.
-k Build class-based models.
-v Perform a tag dictionary check (look for missing words).
-n Clean temporary files after finishing building the models.
2.3.2 Examples
The following two examples:
example% lm-ez -d lm -m monroe -l logs -w \
-c /u/trains/lm/domains/monroe/config/monroe.config

example% lm-ez -d lm -m monroe -w

both do the same thing: build a word-based model for the monroe domain in the folder lm. In the second example, the log directory and the configuration file will receive default values, equal to the corresponding values set explicitly in the first example.
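
As a further example, models built from a new text can be combined with existing base models for static adaptation, using only the options listed in 2.3.1; the corpus and base model file names below are hypothetical:

example% lm-ez -d lm -m monroe-adapted -w \
-t new_sessions.txt -b base.lm base.dic \
-c /u/trains/lm/domains/monroe/config/monroe.config

The script will then prompt for the interpolation weights discussed in section 2.3.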

2.4 Creating native Sphinx class based models

The technique presented above uses tag dictionaries (tagdics) to build word-based models whose probabilities are smoothed based on classes. An alternative technique is to generate class-based models together with a second file that defines the word probabilities within each class. This has a number of advantages; in particular, it reduces the O(n²) search space of bigram probabilities to O(2n). However, it requires some additional steps during LM building.

2.4.1 Class definition file

You will need to create a class definition file. The format is defined in 1.4 above. This can be generated from an artificial grammar using the program grammar2class as follows.

grammar2class -g <grammarfile> > my_classes

You may need to edit this file to add or remove classes you want to use.

Of course you can always create your own class file by hand.

2.4.2 Generating 'artificial' corpus and inclass probability model

Now when running gramdump you will need to specify the class file you created above.

gramdump -classes my_classes -g my_grammar -n 10000 > my_dump

Since this is a class-based model (not a word-based model), you will need to generate the in-class probability model for Sphinx. This is accomplished in two steps, using classdump and dump2corpus.

classdump -c my_classes -sphinx > my_inclass

classdump creates a uniform probability model over the words in each class, in Sphinx format. You will next need to run dump2corpus with the -c option to update the probabilities to reflect the distribution in the dump file, or in your tagged corpus.

dump2corpus -d my_dump -c my_classes > my_corpus

Not to make things too complicated, but you have a number of options when using classdump and dump2corpus above to control class assignment. These are needed because Sphinx and lm-ez expect each word to belong to only one class, whereas the class definitions and artificial grammars allow words to belong to multiple (possibly nested) classes.

By default, every word is given its own lexical entry with the class in which it was found attached, so every word gets a unique entry for every class it appears in. For example, in the medadvisor domain the drug aspirin is a non-prescription drug, which belongs to the superclass of drugs, and drugs in turn belong to the superclass of substances. So all of the following entries would appear in the medadvisor model if no options are used when running classdump and dump2corpus.

ASPIRIN/[DRUG_NON_PRESCRIPTION]
ASPIRIN/[DRUG_NON_PRESCRIPTION]/[DRUG]
ASPIRIN/[DRUG_NON_PRESCRIPTION]/[SUBSTANCE]

lm-ez removes the trailing class tags and obtains the correct pronunciation regardless of the redundant entries.

Two other possibilities exist when tagging these words: we can use only the highest class (e.g. aspirin is in the class [SUBSTANCE]), or we can use only the lowest classes (e.g. aspirin is in the class [DRUG_NON_PRESCRIPTION]). To specify the former, add the -top option when running dump2corpus and classdump.

classdump -c my_classes -sphinx -top > my_inclass
dump2corpus -d my_dump -sphinxclass my_inclass -top > my_corpus

For the aspirin example, this generates the following entries.

ASPIRIN/[DRUG]
ASPIRIN/[SUBSTANCE]

Alternatively, we may want every class to be expanded into all of its constituents.

classdump -c my_classes -sphinx -bottom > my_inclass
dump2corpus -d my_dump -sphinxclass my_inclass -bottom > my_corpus

For the aspirin example, this generates the following entries.

ASPIRIN/[DRUG_NON_PRESCRIPTION]

3. Configuring Sphinx

Once you have your n-gram model and dictionary from running lm-ez on the class-based corpus generated above, you need to configure Sphinx to read these files.

3.1 lm.ctl

Sphinx can be told on the command line which n-gram model and dictionary to use at runtime.

sphinx2-continuous -dictfn <dictionaryfile> -lmfn <ngramfile>

Alternatively, you can use a control file, which we usually name lm.ctl.

The control file is organized as follows.

{ <inclass_lm> }
<ngramfile> <lmname>
{ <CLASSNAME1> <CLASSNAME2> ... }

The lines in curly braces are optional; they specify an in-class model and the classes from that model which will be used by the named LM. If you are not using native Sphinx class-based models (see section 2.4 above), you do not need to include those lines in lm.ctl.
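
For example, an lm.ctl entry for the class-based model built in section 2.4 might look like the following, where my_inclass is the in-class probability file generated above, the class names are those from the aspirin example, and the n-gram file name is illustrative:

{ my_inclass }
medadvisor/lm.bigram medadvisor
{ DRUG DRUG_NON_PRESCRIPTION SUBSTANCE }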

You can also specify multiple models with different names. For example,

pacifica/lm.bigram pacifica
monroe/lm.bigram monroe
medadvisor/lm.bigram medadvisor
medadvisor-demo/lm.bigram medadvisor-demo
test/lm.bigram test

 

4. Related information

The CMU Pronouncing Dictionary (current version: 0.6)
The CMU-Cambridge Statistical Language Modeling toolkit (current version: 2.05)
 
 
 

5. Notes

1. Here we use the convention of adding a star (*) to the names of all executables, to make them more readily distinguishable from the many data files that appear throughout the text.
2. In all examples, $DOMAIN represents the name of the domain for which models are built, and should be replaced with that name in order for the commands to work. Alternatively, before trying the sample commands, the user may want to setenv DOMAIN to the name of the current domain.