The ReLU activation function was a critical innovation. Compared to the saturating nonlinearities (sigmoid, tanh) that were standard at the time, ReLU allowed training to converge roughly six times faster on the same architecture:
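Why ReLU speeds up convergence can be seen in the gradients: tanh saturates for large inputs, so its gradient vanishes, while ReLU keeps a constant gradient of 1 on its active side. A minimal NumPy sketch (illustrative, not from the paper) contrasting the two:

```python
import numpy as np

def relu(x):
    # ReLU: f(x) = max(0, x); non-saturating for x > 0
    return np.maximum(0.0, x)

def relu_grad(x):
    # Gradient is 1 wherever the unit is active, 0 otherwise
    return (x > 0).astype(float)

def tanh_grad(x):
    # Derivative of tanh: 1 - tanh(x)^2, which approaches 0 for large |x|
    return 1.0 - np.tanh(x) ** 2

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(relu(x))       # [0. 0. 0. 1. 5.]
print(relu_grad(x))  # [0. 0. 0. 1. 1.]  -- constant gradient when active
print(tanh_grad(x))  # near zero at x = ±5: the unit has saturated
```

Because a saturated tanh unit passes back an almost-zero gradient, weight updates stall; ReLU units in their active region always propagate a full-strength gradient, which is one reason the same network trained with ReLUs reaches a given training error much sooner.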