Translations:Batch Normalization Accelerating Deep Network Training/2/en
    Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift is a 2015 paper by Ioffe and Szegedy from Google that introduced batch normalization (BatchNorm), a technique for normalizing layer inputs during neural network training. By reducing what the authors termed internal covariate shift — the change in the distribution of network activations as parameters are updated — batch normalization allowed the use of much higher learning rates, reduced sensitivity to initialization, and in some cases acted as a regularizer, eliminating the need for dropout.
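The core transform can be sketched in a few lines: for each feature, normalize activations to zero mean and unit variance over the mini-batch, then apply a learned scale and shift. The function and variable names below are illustrative, not from the paper; this is a minimal NumPy sketch of the training-time forward pass, omitting the running statistics used at inference.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Per-feature batch normalization over a mini-batch (illustrative sketch)."""
    # Statistics are computed over the batch dimension (axis 0).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    # Normalize; eps guards against division by zero for low-variance features.
    x_hat = (x - mean) / np.sqrt(var + eps)
    # Learned scale (gamma) and shift (beta) let the network recover the
    # original activations if that is optimal, preserving expressive power.
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 4))  # batch of 64, 4 features
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
```

With unit gamma and zero beta, each output feature has approximately zero mean and unit variance regardless of the input distribution, which is what stabilizes the distributions seen by subsequent layers.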