RNN language model with word clustering and class-based output layer

Yongzhe Shi, Wei Qiang Zhang, Jia Liu, Michael T. Johnson

Research output: Contribution to journal › Article › peer-review

28 Scopus citations


The recurrent neural network language model (RNNLM) has shown significant promise for statistical language modeling. In this work, a new class-based output layer method is introduced to further improve the RNNLM. In this method, word class information is incorporated into the output layer by using the Brown clustering algorithm to estimate a class-based language model. Experimental results show that the new output layer with word clustering not only clearly improves convergence but also reduces perplexity and word error rate in large-vocabulary continuous speech recognition.
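The class-based output layer described in the abstract factorizes the word probability as P(w | h) = P(c(w) | h) · P(w | c(w), h), so the softmax is computed over classes and then over the words within one class rather than over the whole vocabulary. The following NumPy sketch illustrates that factorization under assumed names (`ClassBasedOutputLayer`, `word2class`); it is a minimal illustration of the general technique, not the authors' implementation, and the class assignments would in practice come from Brown clustering.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class ClassBasedOutputLayer:
    """Factorized softmax: P(w | h) = P(c(w) | h) * P(w | c(w), h).

    Hypothetical sketch: words are grouped into classes (e.g. from Brown
    clustering) and the output layer first predicts the class of the next
    word, then the word within that class."""

    def __init__(self, hidden_dim, word2class, rng=None):
        rng = rng or np.random.default_rng(0)
        self.word2class = list(word2class)        # word id -> class id
        n_classes = max(self.word2class) + 1
        # Precompute the word ids belonging to each class.
        w2c = np.asarray(self.word2class)
        self.class_words = [np.flatnonzero(w2c == c) for c in range(n_classes)]
        # Separate weight matrices for the class and word sub-softmaxes.
        self.W_class = rng.normal(0.0, 0.1, (n_classes, hidden_dim))
        self.W_word = rng.normal(0.0, 0.1, (len(self.word2class), hidden_dim))

    def prob(self, h, w):
        """Probability of word id `w` given hidden state `h`."""
        c = self.word2class[w]
        p_class = softmax(self.W_class @ h)[c]
        members = self.class_words[c]
        # Softmax restricted to the words of class c only.
        within = softmax(self.W_word[members] @ h)
        p_word_given_class = within[np.flatnonzero(members == w)[0]]
        return p_class * p_word_given_class
```

Because each sub-softmax normalizes to one, the product still defines a proper distribution over the full vocabulary, while each prediction touches only the class scores plus one class's word scores instead of all vocabulary scores, which is the source of the speed-up.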

Original language: English
Article number: 22
Journal: EURASIP Journal on Audio, Speech, and Music Processing
Issue number: 1
State: Published - 2013

Bibliographical note

Funding Information:
This work was supported by the National Natural Science Foundation of China under grant nos. 61273268, 61005019 and 90920302, and in part by Beijing Natural Science Foundation Program under grant no. KZ201110005005.

Keywords

  • Brown word clustering
  • RNN language model
  • Speech recognition

ASJC Scopus subject areas

  • Acoustics and Ultrasonics
  • Electrical and Electronic Engineering
