TY - GEN
T1 - A neural network having fewer inner constants to be trained and Bayesian decision
AU - Ito, Yoshifusa
AU - Srinivasan, Cidambi
AU - Izumi, Hiroyuki
PY - 2007
Y1 - 2007
N2 - The number of trainable constants in a neural network, such as connection weights and thresholds, may directly determine the complexity of its learning space and, consequently, affect the learning process. The locations of these constants within the network are also likely related to this complexity. Moreover, a constant trained at the first step of backpropagation (BP) learning may add less to the complexity of the learning space than constants trained at later steps. Reflecting this perspective, this paper proposes a one-hidden-layer neural network whose learning space is less complex than that of an ordinary one-hidden-layer neural network. Specifically, we construct a one-hidden-layer neural network with fewer constants to be trained, most of which are trained at the first step of BP training. The network has more hidden-layer units than the minimum required for approximation, but the number of constants to be trained is smaller. The goal of the network is to overcome the difficulties of statistical learning with dichotomous random teacher signals. As an example, we apply it to the approximation of a Bayesian discriminant function.
UR - http://www.scopus.com/inward/record.url?scp=51749087272&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=51749087272&partnerID=8YFLogxK
DO - 10.1109/IJCNN.2007.4371437
M3 - Conference contribution
AN - SCOPUS:51749087272
SN - 142441380X
SN - 9781424413805
T3 - IEEE International Conference on Neural Networks - Conference Proceedings
SP - 2993
EP - 2998
BT - The 2007 International Joint Conference on Neural Networks, IJCNN 2007 Conference Proceedings
T2 - 2007 International Joint Conference on Neural Networks, IJCNN 2007
Y2 - 12 August 2007 through 17 August 2007
ER -