| Function | Formula | Range | Notes |
|---|---|---|---|
| Sigmoid | $ \sigma(z) = \frac{1}{1+e^{-z}} $ | (0, 1) | Historically popular; suffers from vanishing gradients |
| Tanh | $ \tanh(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}} $ | (−1, 1) | Zero-centred; still saturates for large inputs |
| ReLU | $ \max(0, z) $ | [0, ∞) | Default choice in modern networks; can cause "dead neurons" |
| Leaky ReLU | $ \max(\alpha z, z) $ for small $ \alpha > 0 $ | (−∞, ∞) | Addresses the dead-neuron problem |
| Softmax | $ \frac{e^{z_i}}{\sum_j e^{z_j}} $ | (0, 1) | Used in output layer for multi-class classification |
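
As a quick illustration of the formulas in the table, here is a minimal NumPy sketch of each activation. The function names, the default `alpha = 0.01` for Leaky ReLU, and the max-subtraction trick in the softmax are illustrative choices, not something specified by the table itself.

```python
import numpy as np

def sigmoid(z):
    # sigma(z) = 1 / (1 + e^{-z}); output in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # tanh(z) = (e^z - e^{-z}) / (e^z + e^{-z}); output in (-1, 1)
    return np.tanh(z)

def relu(z):
    # max(0, z); output in [0, inf)
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    # max(alpha * z, z) for small alpha > 0; output in (-inf, inf)
    # alpha = 0.01 is an assumed common default, not from the table
    return np.maximum(alpha * z, z)

def softmax(z):
    # e^{z_i} / sum_j e^{z_j}; subtract max(z) first for numerical stability
    shifted = z - np.max(z, axis=-1, keepdims=True)
    exp_z = np.exp(shifted)
    return exp_z / np.sum(exp_z, axis=-1, keepdims=True)

if __name__ == "__main__":
    z = np.array([-2.0, 0.0, 3.0])
    print(sigmoid(z))     # approx [0.119, 0.5, 0.953]
    print(leaky_relu(z))  # [-0.02, 0.0, 3.0]
    print(softmax(z))     # non-negative entries that sum to 1
```

In practice these element-wise activations are applied to a whole layer's pre-activations at once, which is why the sketch operates on NumPy arrays rather than scalars; the softmax reduces over the last axis so it also works on a batch of logit vectors.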