Batch Normalization Preconditioning for Neural Network Training

Susanna Lange, Kyle Helfrich, Qiang Ye

Research output: Contribution to journal › Article › peer-review

Abstract

Batch normalization (BN) is a popular and ubiquitous method in deep learning that has been shown to decrease training time and improve the generalization performance of neural networks. Despite its success, BN is not theoretically well understood, and it is not suitable for use with very small mini-batch sizes or online learning. In this paper, we propose a new method called Batch Normalization Preconditioning (BNP). Instead of applying normalization explicitly through a batch normalization layer, as is done in BN, BNP applies normalization by conditioning the parameter gradients directly during training. This is designed to improve the conditioning of the Hessian matrix of the loss function and hence convergence during training. One benefit is that BNP is not constrained by the mini-batch size and works in the online learning setting. Furthermore, its connection to BN provides theoretical insight into how BN improves training and how BN is applied to special architectures such as convolutional neural networks. As a theoretical foundation, we also present a novel convergence theory based on the Hessian condition number for a loss that is locally convex but not strongly convex, which is applicable to networks with a scale-invariant property.
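The abstract describes BNP as preconditioning a layer's parameter gradients with mini-batch statistics rather than inserting an explicit normalization layer into the forward pass. The following is a minimal NumPy sketch of that idea for a single fully connected layer; the function name, the diagonal (per-feature variance) approximation, and the epsilon constant are illustrative assumptions, not the exact update derived in the paper.

    import numpy as np

    def bnp_preconditioned_grad(grad_W, X, eps=1e-5):
        # Illustrative sketch: rescale the weight gradient by the inverse
        # per-feature batch variance, approximating the Hessian-conditioning
        # effect that an explicit batch normalization layer would provide.
        # grad_W: (d_in, d_out) loss gradient w.r.t. the layer weights
        # X:      (batch, d_in) mini-batch of inputs feeding the layer
        var = X.var(axis=0) + eps        # per-feature variance, regularized
        return grad_W / var[:, None]     # diagonal preconditioner

    # Hypothetical use inside a gradient-descent step:
    # W -= learning_rate * bnp_preconditioned_grad(grad_W, X_batch)

Because the batch statistics enter only through the gradient transformation and not through the forward pass, running estimates of the statistics could stand in for per-batch estimates, which is consistent with the abstract's claim that BNP is not constrained by the mini-batch size and works in the online setting.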

Original language: English
Pages (from-to): 1-41
Number of pages: 41
Journal: Journal of Machine Learning Research
Volume: 23
State: Published - 2022

Bibliographical note

Funding Information:
This research was supported in part by NSF under grants DMS-1620082 and DMS-1821144. We thank three anonymous referees for many constructive comments and suggestions that significantly improved the paper. We also thank the University of Kentucky Center for Computational Sciences and Information Technology Services Research Computing for their support and for the use of the Lipscomb Compute Cluster and associated research computing resources.

Publisher Copyright:
© 2022 Susanna Lange, Kyle Helfrich, and Qiang Ye.

Keywords

  • Batch normalization
  • Convolutional neural networks
  • Deep neural networks
  • Preconditioning

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence
