Abstract
In this chapter, we study the generalization performance of min-norm overfitting solutions for the neural tangent kernel (NTK) model of a two-layer neural network with ReLU activation and no bias term. We show that, depending on the ground-truth function, the test error of overfitted NTK models exhibits characteristics different from the "double descent" of other overparameterized linear models with simple Fourier or Gaussian features. Specifically, for a class of learnable functions, we derive a new upper bound on the generalization error that approaches a small limiting value, even as the number of neurons p approaches infinity. This limiting value further decreases with the number of training samples n. For functions outside this class, we provide a lower bound on the generalization error that does not diminish to zero even when both n and p are large.
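As a minimal, hedged illustration of the setup described above (not the authors' code), the sketch below builds the NTK gradient features of a two-layer ReLU network without a bias term and computes the min-norm solution that interpolates the training data via the pseudoinverse. All concrete choices here, including the dimensions d, n, p, the unit-sphere inputs, and the placeholder ground-truth function, are illustrative assumptions.

```python
# Sketch of the min-norm overfitted NTK solution for a two-layer
# ReLU network without bias; all settings below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, n, p = 5, 100, 2000          # input dim, training samples, neurons (p*d >> n)

X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # inputs on the unit sphere
y = np.sin(3.0 * X[:, 0])                        # placeholder ground-truth labels

W = rng.standard_normal((p, d))                  # random initial bottom-layer weights
# NTK (gradient) feature of x: concatenation over neurons j of
# x * 1{w_j^T x > 0} / sqrt(p), giving an (n, p*d) design matrix.
act = (X @ W.T > 0).astype(float)                # (n, p) ReLU activation pattern
F = (act[:, :, None] * X[:, None, :]).reshape(n, p * d) / np.sqrt(p)

# Min-norm interpolator of the training data: delta_w = F^T (F F^T)^{-1} y,
# computed here with the Moore-Penrose pseudoinverse.
delta_w = np.linalg.pinv(F) @ y
print("train residual:", np.linalg.norm(F @ delta_w - y))   # ~0, i.e., overfitted
```

Since p*d greatly exceeds n, many interpolating solutions exist; the pseudoinverse selects the one with the smallest l2 norm, which is the overfitted solution whose test error the chapter analyzes.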
| Original language | English |
|---|---|
| Title of host publication | Artificial Intelligence for Edge Computing |
| Pages | 111-135 |
| Number of pages | 25 |
| ISBN (Electronic) | 9783031407871 |
| DOI | |
| Publication status | Published - Dec 21, 2023 |
Bibliographical note
Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. All rights reserved.
ASJC Scopus subject areas
- General Computer Science
- General Engineering