NFNets: High-Performance Large-Scale Image Recognition Without Normalization (ML Paper Explained)
Yannic Kilcher
#nfnets #deepmind #machinelearning
Batch Normalization is a core component of modern deep learning. It enables training at higher batch sizes, prevents mean shift, provides implicit regularization, and allows networks to reach higher accuracy than they could without it. However, BatchNorm also has disadvantages, such as its dependence on the batch size and its computational overhead, especially in distributed settings. Normalizer-Free Networks, developed at DeepMind, are a class of CNNs that achieve state-of-the-art classification accuracy on ImageNet without batch normalization. This is achieved by using adaptive gradient clipping (AGC), combined with a number of improvements to the general network architecture. The resulting networks train faster, are more accurate, and provide better transfer learning performance. Code is provided in JAX.
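As a rough illustration of the AGC idea covered in the video: each parameter's gradient is rescaled whenever its norm grows too large relative to the norm of the parameter itself. The following is a minimal JAX sketch, not the official API; the names adaptive_grad_clip and unitwise_norm are made up for this example and the axis convention is simplified compared to the reference implementation in the linked deepmind-research repo.

import jax
import jax.numpy as jnp

def unitwise_norm(x):
    # Norm taken per output unit (here: per row of a 2-D weight); scalars/vectors use |x|.
    if x.ndim <= 1:
        return jnp.abs(x)
    return jnp.sqrt(jnp.sum(x ** 2, axis=tuple(range(1, x.ndim)), keepdims=True))

def adaptive_grad_clip(grads, params, clipping=0.01, eps=1e-3):
    # Clip each gradient so that ||G|| / ||W|| never exceeds `clipping`.
    def clip_one(g, w):
        w_norm = jnp.maximum(unitwise_norm(w), eps)   # guard against zero-initialized weights
        g_norm = unitwise_norm(g)
        max_norm = clipping * w_norm
        # Rescale only where the gradient-to-weight ratio is too large.
        scale = jnp.where(g_norm > max_norm, max_norm / jnp.maximum(g_norm, 1e-6), 1.0)
        return g * scale
    return jax.tree_util.tree_map(clip_one, grads, params)

In the paper this clipping is applied to the gradients of every layer except the final linear layer, right before the optimizer step, with smaller clipping thresholds used at larger batch sizes.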
OUTLINE: 0:00 - Intro & Overview 2:40 - What's the problem with BatchNorm? 11:00 - Paper contribution Overview 13:30 - Beneficial properties of BatchNorm 15:30 - Previous work: NF-ResNets 18:15 - Adaptive Gradient Clipping 21:40 - AGC and large batch size 23:30 - AGC induces implicit dependence between training samples 28:30 - Are BatchNorm's problems solved? 30:00 - Network architecture improvements 31:10 - Comparison to EfficientNet 33:00 - Conclusion & Comments
Paper: https://arxiv.org/abs/2102.06171
Code: https://github.com/deepmind/deepmind-research/tree/master/nfnets
My Video on BatchNorm: https://www.youtube.com/watch?v=OioFONrSETc
My Video on ResNets: https://www.youtube.com/watch?v=GWt6Fu05voI
ERRATA (from Lucas Beyer): "I believe you missed the main concern with "batch cheating". It's for losses that act on the full batch, as opposed to on each sample individually. For example, triplet in FaceNet or n-pairs in CLIP. BN allows for "shortcut" solution to loss. See also BatchReNorm paper."
Abstract: Batch normalization is a key component of most image classification models, but it has many undesirable properties stemming from its dependence on the batch size and interactions between examples. Although recent work has succeeded in training deep ResNets without normalization layers, these models do not match the test accuracies of the best batch-normalized networks, and are often unstable for large learning rates or strong data augmentations. In this work, we develop an adaptive gradient clipping technique which overcomes these instabilities, and design a significantly improved class of Normalizer-Free ResNets. Our smaller models match the test accuracy of an EfficientNet-B7 on ImageNet while being up to 8.7x faster to train, and our largest models attain a new state-of-the-art top-1 accuracy of 86.5%. In addition, Normalizer-Free models attain significantly better performance than their batch-normalized counterparts ...
Video: https://www.youtube.com/watch?v=rNkHjZtH0RQ