{| class="wikitable"
|-
! Function !! Formula !! Range !! Notes
|-
| '''Sigmoid''' || <math>\sigma(z) = \frac{1}{1+e^{-z}}</math> || (0, 1) || Historically popular; suffers from vanishing gradients
|-
| '''Tanh''' || <math>\tanh(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}</math> || (−1, 1) || Zero-centred; still saturates for large inputs
|-
| '''ReLU''' || <math>\max(0, z)</math> || [0, ∞) || Default choice in modern networks; can cause "dead neurons"
|-
| '''Leaky ReLU''' || <math>\max(\alpha z, z)</math> for small <math>\alpha > 0</math> || (−∞, ∞) || Addresses the dead-neuron problem
|-
| '''Softmax''' || <math>\frac{e^{z_i}}{\sum_j e^{z_j}}</math> || (0, 1) || Used in the output layer for multi-class classification; outputs form a probability distribution (sum to 1)
|}
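The table above maps directly to code. The following is a minimal NumPy sketch of each function, not a reference implementation: the function names are illustrative, and the default <math>\alpha = 0.01</math> for Leaky ReLU is an assumed common value (the table only requires a small <math>\alpha > 0</math>). The softmax subtracts the row maximum before exponentiating, a standard stability trick that cancels in the ratio but prevents overflow.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    # Squashes inputs into (0, 1); saturates (vanishing gradients) for large |z|.
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Zero-centred counterpart of the sigmoid, range (-1, 1); also saturates.
    return np.tanh(z)

def relu(z):
    # max(0, z): non-saturating for z > 0, but gradient is 0 for z < 0 ("dead neurons").
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    # max(alpha*z, z): a small negative slope keeps gradients alive for z < 0.
    # alpha=0.01 is an assumed illustrative default, not mandated by the table.
    return np.maximum(alpha * z, z)

def softmax(z):
    # Normalised exponentials over the last axis; outputs lie in (0, 1) and sum to 1.
    # Subtracting the max does not change the result but keeps np.exp from overflowing.
    shifted = z - np.max(z, axis=-1, keepdims=True)
    exp_z = np.exp(shifted)
    return exp_z / np.sum(exp_z, axis=-1, keepdims=True)
</syntaxhighlight>

For example, <code>softmax(np.array([-2.0, 0.0, 3.0]))</code> returns approximately <code>[0.006, 0.047, 0.946]</code>, a valid probability distribution dominated by the largest logit.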