Bayesian decision theory on three-layer neural networks

Yoshifusa Ito, Cidambi Srinivasan

Research output: Contribution to journal › Article › peer-review

14 Scopus citations


We discuss Bayesian decision theory on neural networks. In the two-category case where the state-conditional probabilities are normal, a three-layer neural network with d hidden-layer units can approximate the posterior probability in Lp(Rd, p), where d is the dimension of the space of observables. We extend this result to the multicategory case. The number of hidden-layer units must then be increased, but it can be bounded by d(d + 1)/2 irrespective of the number of categories if the neural network has direct connections between the input and output layers. In the case where the state-conditional probability is a familiar distribution such as the binomial, multinomial, Poisson, or negative binomial, a two-layer neural network can approximate the posterior probability.
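The two-category normal case can be made concrete. By Bayes' rule the posterior is the logistic transform of the log-likelihood ratio, which is a quadratic polynomial in the observable x when both state-conditional densities are normal; a network with an output logistic unit therefore only needs to represent this quadratic. The following sketch (with hypothetical means, covariances, and priors, not taken from the paper) verifies that identity numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2  # dimension of the space of observables

# Hypothetical parameters of the two normal state-conditional densities
mu0, mu1 = np.zeros(d), np.ones(d)
S0 = np.eye(d)
S1 = np.array([[2.0, 0.3], [0.3, 1.0]])
p0 = p1 = 0.5  # equal prior probabilities of the two categories

def log_gauss(x, mu, S):
    """Log density of N(mu, S) evaluated at x."""
    diff = x - mu
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(S, diff))

def posterior(x):
    """Exact posterior P(category 1 | x) by Bayes' rule."""
    a0 = np.log(p0) + log_gauss(x, mu0, S0)
    a1 = np.log(p1) + log_gauss(x, mu1, S1)
    # Logistic transform of the log-likelihood ratio; for normal
    # class conditionals this ratio is quadratic in x.
    return 1.0 / (1.0 + np.exp(a0 - a1))

# The posterior equals the logistic of the quadratic discriminant q(x).
x = rng.standard_normal(d)
q = (np.log(p1) + log_gauss(x, mu1, S1)) - (np.log(p0) + log_gauss(x, mu0, S0))
assert np.isclose(posterior(x), 1.0 / (1.0 + np.exp(-q)))
```

A quadratic form in d variables has d(d + 1)/2 distinct second-order monomials, which is consistent with the hidden-unit bound in the abstract when direct input-output connections supply the linear and constant terms.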

Original language: English
Pages (from-to): 209-228
Number of pages: 20
Issue number: SPEC. ISS.
State: Published - Jan 2005


Keywords

  • Approximation
  • Bayesian decision
  • Direct connection
  • Layered neural network
  • Logistic transform

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence


