Translations:ImageNet Classification with Deep CNNs/11/en
The ReLU activation function was a critical innovation. Compared to the saturating nonlinearities (sigmoid, tanh) standard at the time, ReLU enabled training to converge much faster: on an equivalent architecture, a ReLU network reached the same training error roughly six times faster than one using tanh units.
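The speed-up comes from ReLU's gradient behavior: unlike sigmoid or tanh, whose derivatives shrink toward zero for large inputs, ReLU's gradient stays at 1 for all positive inputs. A minimal NumPy sketch (function names are illustrative, not from the original work):

```python
import numpy as np

def relu(x):
    # ReLU: max(0, x) elementwise
    return np.maximum(0.0, x)

def relu_grad(x):
    # ReLU derivative: 1 where x > 0, else 0 -- never saturates on the positive side
    return (x > 0).astype(float)

def tanh_grad(x):
    # tanh derivative: 1 - tanh(x)^2 -- vanishes as |x| grows
    return 1.0 - np.tanh(x) ** 2

x = np.array([-5.0, -1.0, 0.5, 5.0])
print(relu(x))       # zero for negative inputs, identity for positive
print(relu_grad(x))  # gradient is exactly 1 for every positive input
print(tanh_grad(x))  # gradient is tiny for |x| = 5, slowing learning
```

For an input of 5.0, the tanh gradient is below 0.001 while the ReLU gradient is exactly 1, so error signals propagate through deep ReLU networks without being repeatedly attenuated.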