Abstract
Recurrent neural networks (RNNs) can model temporal dependencies, but the problem of exploding or vanishing gradients has limited their application. In recent years, long short-term memory RNNs (LSTM RNNs) have been proposed to solve this problem and have achieved excellent results. Bidirectional LSTM (BLSTM), which exploits both preceding and following context, has shown particularly good performance. However, the computational requirements of BLSTM approaches are quite heavy, even when implemented efficiently on GPU-based high-performance computers. In addition, because the output of LSTM units is bounded, a vanishing gradient issue often remains across multiple layers, and the large size of LSTM networks makes them prone to overfitting. In this work, we combine a local bidirectional architecture, gated recurrent units (GRUs) as the recurrent unit, and residual architectures to address these problems. Experiments are conducted on the benchmark datasets released under the IARPA Babel Program. The proposed models achieve relative improvements of 3 to 10% over their corresponding DNN or LSTM baselines across seven language collections, and they accelerate training by a factor of more than 1.6 compared to conventional BLSTM models. Using these approaches, we achieve good results in the IARPA Babel Program.
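To illustrate the kind of building block the abstract describes, the following is a minimal PyTorch-style sketch of a bidirectional GRU layer with a residual connection. The class name, layer sizes, and stacking are hypothetical; the paper's exact local-bidirectional windowing and residual configuration are not specified in the abstract.

```python
import torch
import torch.nn as nn


class ResidualBiGRUBlock(nn.Module):
    """One bidirectional GRU layer with a residual (skip) connection.

    Hypothetical sketch: each direction uses dim // 2 hidden units so the
    concatenated forward/backward output matches the input width, which
    allows a simple element-wise residual addition.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.gru = nn.GRU(dim, dim // 2, batch_first=True, bidirectional=True)

    def forward(self, x):
        out, _ = self.gru(x)   # (batch, time, dim)
        return out + x         # residual connection around the recurrent layer


# Usage: stack a few blocks over acoustic feature frames.
frames = torch.randn(8, 200, 128)  # (batch, time, feature_dim)
model = nn.Sequential(ResidualBiGRUBlock(128), ResidualBiGRUBlock(128))
hidden = model(frames)
print(hidden.shape)                # torch.Size([8, 200, 128])
```

The residual addition lets gradients bypass each recurrent layer, which is one common way to mitigate the vanishing-gradient issue over deep stacks that the abstract mentions.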
| Original language | English |
|---|---|
| Article number | 6 |
| Journal | EURASIP Journal on Audio, Speech, and Music Processing |
| Volume | 2018 |
| Issue number | 1 |
| DOIs | |
| State | Published - Dec 1 2018 |
Bibliographical note
Publisher Copyright: © 2018, The Author(s).
Keywords
- Gated recurrent units
- Low resource speech recognition
- Recurrent architectures
ASJC Scopus subject areas
- Acoustics and Ultrasonics
- Electrical and Electronic Engineering