Stability of neural networks for slightly perturbed training data sets

Indrias Berhane, C. Srinivasan

Research output: Contribution to journal › Article › peer-review

1 Scopus citations

Abstract

In learning models of artificial neural networks, randomness comes from the distribution of the training data. We show that individual observations do not excessively affect a neural network model, provided the network has adequately many nodes in the hidden layer, and we prove that the empirical error of a neural network with p weights converges to the expected error when p/m → 0, where m is the size of the perturbed training data set.
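The convergence claim can be illustrated numerically. The following sketch is a hypothetical construction, not the paper's: a one-hidden-layer network with fixed random hidden weights, where only the p output-layer weights are fit, so p/m → 0 as the training-set size m grows. The gap between the empirical (training) error and a Monte Carlo estimate of the expected error should shrink accordingly.

```python
import numpy as np

rng = np.random.default_rng(0)

p = 20  # number of trainable (output-layer) weights

def target(x):
    return np.sin(3 * x)

def features(x, W, b):
    # hidden-layer activations (tanh nonlinearity); W, b are fixed random hidden weights
    return np.tanh(np.outer(x, W) + b)

W = rng.normal(size=p)
b = rng.normal(size=p)
sigma = 0.3  # noise level of the training data (assumed for this illustration)

# large held-out sample approximating the expected error
x_test = rng.uniform(-1, 1, 20000)
Phi_test = features(x_test, W, b)

gaps = []
for m in (50, 500, 5000):
    x = rng.uniform(-1, 1, m)
    y = target(x) + rng.normal(scale=sigma, size=m)  # slightly noisy training data
    Phi = features(x, W, b)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # fit the p output weights
    emp = np.mean((Phi @ w - y) ** 2)                # empirical (training) error
    # expected error: approximation error on held-out points plus noise variance
    exp_err = np.mean((Phi_test @ w - target(x_test)) ** 2) + sigma**2
    gaps.append(abs(exp_err - emp))

print(gaps)  # the empirical-expected gap shrinks as m grows (p/m -> 0)
```

With p fixed at 20, the gap at m = 5000 (p/m = 0.004) is much smaller than at m = 50 (p/m = 0.4), consistent with the stated convergence regime.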

Original language: English
Pages (from-to): 2259-2270
Number of pages: 12
Journal: Communications in Statistics - Theory and Methods
Volume: 33
Issue number: 9 SPEC.ISS.
DOIs
State: Published - Sep 2004

Keywords

  • Artificial neural networks
  • Stability

ASJC Scopus subject areas

  • Statistics and Probability
