TY - GEN
T1 - A new algorithm for learning Mahalanobis discriminant functions by a neural network
AU - Ito, Yoshifusa
AU - Izumi, Hiroyuki
AU - Srinivasan, Cidambi
PY - 2011
Y1 - 2011
N2 - It is well known that a neural network can learn Bayesian discriminant functions. In the two-category normal-distribution case, the logit transform of the network output, shifted by a constant, approximates the corresponding Mahalanobis discriminant function [7]. In [10], we proposed an algorithm for estimating this constant, but it requires the network to be trained twice, once with the teacher signals shifted by the mean vectors. In this paper, we propose a more efficient algorithm that estimates the constant while training the network only once.
AB - It is well known that a neural network can learn Bayesian discriminant functions. In the two-category normal-distribution case, the logit transform of the network output, shifted by a constant, approximates the corresponding Mahalanobis discriminant function [7]. In [10], we proposed an algorithm for estimating this constant, but it requires the network to be trained twice, once with the teacher signals shifted by the mean vectors. In this paper, we propose a more efficient algorithm that estimates the constant while training the network only once.
UR - http://www.scopus.com/inward/record.url?scp=81855218298&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=81855218298&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-24958-7_69
DO - 10.1007/978-3-642-24958-7_69
M3 - Conference contribution
AN - SCOPUS:81855218298
SN - 9783642249570
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 596
EP - 605
BT - Neural Information Processing - 18th International Conference, ICONIP 2011, Proceedings
T2 - 18th International Conference on Neural Information Processing, ICONIP 2011
Y2 - 13 November 2011 through 17 November 2011
ER -