This paper presents a novel approach to language model adaptation for speech recognition. We define mutual information histograms that account for different semantic and syntactic relations between words in text data, and we introduce a novel word distance measure based on these histograms. Using this measure, we were able to build linguistically meaningful clusters from the words obtained in a first recognition pass. The words in each cluster were then used to adapt the language models, and the adapted models were applied in a second recognition pass. We conducted experiments on the Fisher speech corpus of telephone conversations; mutual information histograms for word pairs were estimated from the Fisher data as well as from a corpus of New York Times articles. The results show that the word clusters convey significant information and can help improve speech recognition accuracy.
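The abstract does not give the exact form of the mutual information histograms or the word distance measure, so the sketch below illustrates one plausible reading only: a histogram that bins pointwise mutual information of a word pair by the token offset between the two words, and a distance that shrinks as the histogram carries more mass. All names and parameters here (MAX_OFFSET, mi_histogram, word_distance, the toy corpus) are illustrative assumptions, not the authors' definitions.

```python
"""Hedged sketch of a histogram-based word distance for clustering
first-pass hypothesis words, assuming offset-binned pointwise MI."""
import math
from collections import Counter

MAX_OFFSET = 5  # assumed window of syntactic/semantic relations


def count_stats(sentences):
    """Unigram counts and (w1, w2, offset) co-occurrence counts."""
    unigrams, pairs, total = Counter(), Counter(), 0
    for sent in sentences:
        unigrams.update(sent)
        total += len(sent)
        for i, w in enumerate(sent):
            for d in range(1, MAX_OFFSET + 1):
                if i + d < len(sent):
                    pairs[(w, sent[i + d], d)] += 1
    return unigrams, pairs, total


def mi_histogram(w1, w2, unigrams, pairs, total):
    """Pointwise MI of (w1, w2) at each offset; negative values clipped to 0."""
    hist = []
    for d in range(1, MAX_OFFSET + 1):
        c = pairs.get((w1, w2, d), 0) + pairs.get((w2, w1, d), 0)
        if c == 0:
            hist.append(0.0)
            continue
        p_pair = c / total
        p1, p2 = unigrams[w1] / total, unigrams[w2] / total
        hist.append(max(0.0, math.log(p_pair / (p1 * p2))))
    return hist


def word_distance(w1, w2, unigrams, pairs, total):
    """Smaller distance for pairs whose MI histogram carries more mass
    (one possible choice of distance, not the paper's)."""
    return 1.0 / (1.0 + sum(mi_histogram(w1, w2, unigrams, pairs, total)))


if __name__ == "__main__":
    # Toy corpus; in the paper the statistics come from Fisher and NYT text.
    corpus = [["the", "cat", "sat", "on", "the", "mat"],
              ["the", "dog", "sat", "on", "the", "rug"]]
    uni, prs, tot = count_stats(corpus)
    print(word_distance("cat", "on", uni, prs, tot))   # related -> smaller
    print(word_distance("cat", "dog", uni, prs, tot))  # unrelated here -> 1.0
```

Words from a first-pass hypothesis could then be grouped with any standard clustering method (e.g. agglomerative clustering) over this distance, and the resulting clusters used to bias n-gram estimates before the second pass, in the spirit of the adaptation scheme the abstract describes.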
Boosting of Speech Recognition Performance by Language Model Adaptation
2007-03-01
272242 bytes
Conference paper
Electronic Resource
English
Speech Recognition for Japanese Spoken Language
British Library Conference Proceedings | 1994
Language Processing for Chinese Speech Recognition
British Library Conference Proceedings | 1994
Speaker Adaptation Through Spectral Transformation for HMM Based Speech Recognition
British Library Conference Proceedings | 1994