Eigenvalue normalized recurrent neural networks for short term memory

Kyle Helfrich, Qiang Ye

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Several variants of recurrent neural networks (RNNs) with orthogonal or unitary recurrent matrices have recently been developed to mitigate the vanishing/exploding gradient problem and to model long-term dependencies in sequences. However, with the eigenvalues of the recurrent matrix on the unit circle, the recurrent state retains all input information, which may unnecessarily consume model capacity. In this paper, we address this issue by proposing an architecture that expands an orthogonal/unitary RNN with a state generated by a recurrent matrix whose eigenvalues lie inside the unit disc. Any input to this state dissipates over time and is replaced by new inputs, simulating short-term memory. A gradient descent algorithm is derived for learning such a recurrent matrix. The resulting method, called the Eigenvalue Normalized RNN (ENRNN), is shown to be highly competitive in several experiments.
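A minimal numpy sketch of the eigenvalue-normalization idea described above: rescale a recurrent matrix so that its spectral radius sits strictly inside the unit disc, so that repeated application contracts the hidden state and old inputs decay. The function name eigenvalue_normalize and the damping factor gamma are illustrative assumptions; the paper's actual parameterization and gradient descent algorithm are not reproduced here.

    import numpy as np

    def eigenvalue_normalize(W, gamma=0.95):
        """Rescale W so its spectral radius (largest eigenvalue
        magnitude) equals gamma < 1, placing every eigenvalue strictly
        inside the unit disc. This simple rescaling is an illustrative
        assumption, not the paper's learned parameterization."""
        rho = np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius
        return W * (gamma / rho)

    rng = np.random.default_rng(0)
    W = eigenvalue_normalize(rng.standard_normal((64, 64)))

    # With eigenvalues inside the unit disc, repeated application of W
    # contracts the hidden state, so an old input's contribution decays
    # and is displaced by newer inputs (short-term memory).
    h = rng.standard_normal(64)
    for _ in range(100):
        h = W @ h
    print(np.linalg.norm(h))  # near zero: the old input has dissipated

By contrast, an orthogonal/unitary recurrent matrix has all eigenvalues on the unit circle and preserves the state norm indefinitely, which is precisely the capacity concern the abstract raises.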

Original language: English
Title of host publication: AAAI 2020 - 34th AAAI Conference on Artificial Intelligence
Pages: 4115-4122
Number of pages: 8
ISBN (Electronic): 9781577358350
DOIs:
State: Published - 2020
Event: 34th AAAI Conference on Artificial Intelligence, AAAI 2020 - New York, United States
Duration: Feb 7, 2020 - Feb 12, 2020

Publication series

Name: AAAI 2020 - 34th AAAI Conference on Artificial Intelligence

Conference

Conference: 34th AAAI Conference on Artificial Intelligence, AAAI 2020
Country/Territory: United States
City: New York
Period: 2/7/20 - 2/12/20

Bibliographical note

Publisher Copyright:
© 2020, Association for the Advancement of Artificial Intelligence.

ASJC Scopus subject areas

  • Artificial Intelligence

