This paper presents an accurate, efficient, and scalable algorithm for minimizing a special family of convex functions that have an ℓp loss function as an additive component. For this problem, well-known learning algorithms often have well-established results on accuracy and efficiency, but explicit linear scalability with respect to the problem size is rarely reported. The proposed approach starts by developing a second-order learning procedure with iterative descent for general convex penalization functions, and then builds efficient algorithms for a restricted family of functions that satisfy Karmarkar's projective scaling condition. Under this condition, a lightweight, scalable message passing algorithm (MPA) is further developed by constructing a series of simpler equivalent problems. The proposed MPA is intrinsically scalable because it involves only matrix-vector multiplications and avoids matrix inversion. The MPA is proven to be globally convergent for convex formulations; for nonconvex situations, it converges to a stationary point. The accuracy, efficiency, scalability, and applicability of the proposed method are verified through extensive experiments on sparse signal recovery, face image classification, and over-complete dictionary learning problems.
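The paper's MPA itself is not reproduced on this page; as a hedged illustration only, the sketch below shows a generic majorization-minimization-style iteration (ISTA for ℓ1-penalized least squares) that shares the property the abstract highlights: each step uses only matrix-vector products and no matrix inversion. The function names (ista, soft_threshold), the step-size rule, and the toy sparse-recovery problem are all assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Minimize 0.5*||Ax - b||_2^2 + lam*||x||_1.

    Illustrative MM-style iteration: the smooth term is majorized by a
    quadratic, so each update needs only matrix-vector products.
    """
    # Step size from the Lipschitz constant of the gradient,
    # estimated via the spectral norm of A.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)            # matrix-vector products only
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy usage: sparse signal recovery (hypothetical test problem).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[[3, 42, 99]] = [1.5, -2.0, 0.8]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, b, lam=0.1)
```

Because the iteration touches A only through products with vectors, its per-step cost scales linearly with the number of nonzeros of A, which is the kind of scalability the abstract claims for the MPA.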
Number of pages: 12
Journal: IEEE Transactions on Neural Networks and Learning Systems
State: Published - Feb 1 2015
Bibliographical note: Publisher Copyright © 2014 IEEE.
Keywords
- Convex function
- Karmarkar's projective scaling condition
- ℓp loss function
- message passing algorithm (MPA)
- minimization-majorization (MM)
ASJC Scopus subject areas
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence