Translations:Neural Networks/18/zh: Difference between revisions

    From Marovi AI
    (Batch translate Neural Networks unit 18 → zh)
    Tag: translation

    Revision as of 22:02, 27 April 2026

    Message definition (Neural Networks)
    {| class="wikitable"
    |-
    ! Function !! Formula !! Range !! Notes
    |-
    | '''Sigmoid''' || <math>\sigma(z) = \frac{1}{1+e^{-z}}</math> || (0, 1) || Historically popular; suffers from vanishing gradients
    |-
    | '''Tanh''' || <math>\tanh(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}</math> || (−1, 1) || Zero-centred; still saturates for large inputs
    |-
    | '''ReLU''' || <math>\max(0, z)</math> || [0, ∞) || Default choice in modern networks; can cause "dead neurons"
    |-
    | '''Leaky ReLU''' || <math>\max(\alpha z, z)</math> for small <math>\alpha > 0</math> || (−∞, ∞) || Addresses the dead-neuron problem
    |-
    | '''Softmax''' || <math>\frac{e^{z_i}}{\sum_j e^{z_j}}</math> || (0, 1) || Used in output layer for multi-class classification
    |}
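
    The formulas in the message definition above can be sketched in plain Python. This is a minimal illustration only; the function names, the stability shift in softmax, and the default <math>\alpha = 0.01</math> are our own choices, not part of the original table.

    ```python
    import math

    def sigmoid(z):
        # sigma(z) = 1 / (1 + e^{-z}); output in (0, 1)
        return 1.0 / (1.0 + math.exp(-z))

    def tanh(z):
        # tanh(z) = (e^z - e^{-z}) / (e^z + e^{-z}); output in (-1, 1), zero-centred
        return math.tanh(z)

    def relu(z):
        # max(0, z); output in [0, inf); gradient is exactly 0 for z < 0 ("dead neurons")
        return max(0.0, z)

    def leaky_relu(z, alpha=0.01):
        # max(alpha * z, z) for small alpha > 0; keeps a small gradient for z < 0
        return max(alpha * z, z)

    def softmax(zs):
        # e^{z_i} / sum_j e^{z_j} over a vector; each output in (0, 1), sums to 1.
        # Subtracting max(zs) before exponentiating avoids overflow without
        # changing the result.
        m = max(zs)
        exps = [math.exp(z - m) for z in zs]
        total = sum(exps)
        return [e / total for e in exps]
    ```

    For example, <code>sigmoid(0.0)</code> returns 0.5, and the entries of <code>softmax([1.0, 2.0, 3.0])</code> sum to 1.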
    {| class="wikitable"
    |-
    ! 函数 !! 公式 !! 范围 !! 备注
    |-
    | '''Sigmoid''' || <math>\sigma(z) = \frac{1}{1+e^{-z}}</math> || (0, 1) || 历史上很流行;存在梯度消失问题
    |-
    | '''Tanh''' || <math>\tanh(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}</math> || (−1, 1) || 以零为中心;对大输入仍会饱和
    |-
    | '''ReLU''' || <math>\max(0, z)</math> || [0, ∞) || 现代网络的默认选择;可能导致"死神经元"
    |-
    | '''Leaky ReLU''' || <math>\max(\alpha z, z)</math>,其中 <math>\alpha > 0</math> 较小 || (−∞, ∞) || 解决死神经元问题
    |-
    | '''Softmax''' || <math>\frac{e^{z_i}}{\sum_j e^{z_j}}</math> || (0, 1) || 用于多类分类的输出层
    |}