Abstract
The recurrent neural network language model (RNNLM) has shown significant promise for statistical language modeling. In this work, a new class-based output layer method is introduced to further improve the RNNLM. In this method, word class information is incorporated into the output layer by utilizing the Brown clustering algorithm to estimate a class-based language model. Experimental results show that the new output layer with word clustering not only clearly improves convergence but also reduces perplexity and word error rate in large vocabulary continuous speech recognition.
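For illustration only, below is a minimal sketch of the class-factorized output layer idea the abstract refers to, assuming a precomputed word-to-class assignment (a random assignment stands in for an actual Brown clustering) and hypothetical weight names `W_class` and `W_word`. This is not the authors' implementation; it only shows the common factorization P(w | h) = P(class(w) | h) · P(w | class(w), h).

```python
# Hypothetical sketch of a class-factorized RNNLM output layer.
# word2class stands in for a Brown-clustering assignment; the weights
# and dimensions are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

V, C, H = 10, 3, 8                        # vocabulary size, number of classes, hidden size
word2class = rng.integers(0, C, size=V)   # assumed class assignment (would come from Brown clustering)

W_class = rng.standard_normal((C, H)) * 0.1   # hidden state -> class logits
W_word = rng.standard_normal((V, H)) * 0.1    # hidden state -> within-class word logits

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def word_probability(h, w):
    """P(w | h) = P(class(w) | h) * P(w | class(w), h)."""
    c = word2class[w]
    p_class = softmax(W_class @ h)[c]                 # probability of w's class
    members = np.flatnonzero(word2class == c)         # words sharing that class
    within = softmax(W_word[members] @ h)             # softmax restricted to the class
    p_word_given_class = within[np.where(members == w)[0][0]]
    return p_class * p_word_given_class

h = rng.standard_normal(H)                # stand-in for an RNN hidden state
print(word_probability(h, w=4))
```

One practical effect of this factorization is that the per-word output computation drops from O(V) to roughly O(C + |class(w)|), which is a common motivation for class-based output layers with large vocabularies.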
| Original language | English |
|---|---|
| Article number | 22 |
| Journal | EURASIP Journal on Audio, Speech, and Music Processing |
| Volume | 2013 |
| Issue number | 1 |
| DOI | |
| Status | Published - 2013 |
Bibliographical note
Funding Information: This work was supported by the National Natural Science Foundation of China under grant nos. 61273268, 61005019 and 90920302, and in part by the Beijing Natural Science Foundation Program under grant no. KZ201110005005.
Funding
| Funders | Funder number |
|---|---|
| Natural Science Foundation of Beijing Municipality | KZ201110005005 |
| National Natural Science Foundation of China (NSFC) | 61273268, 61005019, 90920302 |
ASJC Scopus subject areas
- Acoustics and Ultrasonics
- Electrical and Electronic Engineering