Translations:ImageNet Classification with Deep CNNs/11/en


    The ReLU activation function, f(x) = max(0, x), was a critical innovation. Compared to the saturating nonlinearities (sigmoid, tanh) that were standard at the time, ReLU allowed training on the same architecture to converge approximately six times faster.
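The contrast between ReLU and saturating nonlinearities can be illustrated with a minimal NumPy sketch (not code from the original paper): the sigmoid gradient vanishes for large |x|, while the ReLU gradient stays at 1 for any positive input, which is what keeps gradient descent moving quickly.

```python
import numpy as np

def relu(x):
    # Rectified Linear Unit: f(x) = max(0, x)
    return np.maximum(0.0, x)

def relu_grad(x):
    # Gradient is 1 for x > 0 and 0 otherwise; it never
    # saturates no matter how large the positive input gets.
    return (x > 0).astype(float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # Gradient peaks at 0.25 near x = 0 and shrinks toward 0
    # as |x| grows -- the "saturation" that slows training.
    s = sigmoid(x)
    return s * (1.0 - s)

x = np.array([-5.0, 0.5, 5.0])
print(relu_grad(x))     # [0. 1. 1.] -- full gradient for positive inputs
print(sigmoid_grad(x))  # gradients shrink toward 0 for large |x|
```

Running this shows `sigmoid_grad(5.0)` is already below 0.01, so a deep stack of sigmoid layers multiplies many such small factors together, whereas ReLU layers pass the gradient through unchanged wherever the unit is active.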