Translations:ImageNet Classification with Deep CNNs/11/en

The ReLU activation function was a critical innovation. Compared to the saturating nonlinearities (sigmoid, tanh) that were standard at the time, ReLU enabled training to converge approximately six times faster on the same architecture.
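
As a rough illustration of why non-saturating units train faster (this sketch is not from the paper; the function names and sample inputs are illustrative), compare the gradients of ReLU and tanh: tanh's gradient vanishes for large-magnitude inputs, while ReLU's gradient stays at 1 for any positive input, so back-propagated error signals do not shrink as they pass through the unit.

```python
import numpy as np

def relu(x):
    # ReLU: f(x) = max(0, x); non-saturating on the positive side.
    return np.maximum(0.0, x)

def relu_grad(x):
    # Gradient is exactly 1 wherever x > 0, regardless of magnitude.
    return (x > 0).astype(float)

def tanh_grad(x):
    # Gradient of tanh is 1 - tanh(x)^2; it decays toward 0 as |x| grows.
    return 1.0 - np.tanh(x) ** 2

x = np.array([-4.0, -1.0, 0.5, 4.0])
print("ReLU gradient:", relu_grad(x))  # [0. 0. 1. 1.]
print("tanh gradient:", tanh_grad(x))  # [~0.0013  ~0.42  ~0.79  ~0.0013]
```

Because a saturating unit multiplies back-propagated gradients by factors near zero whenever its input is large, gradient descent slows dramatically in those regions; avoiding this saturation is the mechanism behind the faster convergence noted above.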