Translations:Neural Networks/14/en: Difference between revisions

    From Marovi AI
    where <math>g</math> and <math>f</math> are {{Term|activation function|activation functions}}, <math>\mathbf{W}_1, \mathbf{W}_2</math> are weight matrices, and <math>\mathbf{b}_1, \mathbf{b}_2</math> are bias vectors. The hidden layer enables the network to learn nonlinear relationships that a single perceptron cannot capture.
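The two-layer computation described above can be sketched as a forward pass. This is a minimal illustration, not the article's own code: the equation the paragraph refers to is not shown here, so the shapes, the choice of tanh for <math>g</math>, and the identity for <math>f</math> are assumptions for demonstration.

```python
import numpy as np

def two_layer_forward(x, W1, b1, W2, b2, g=np.tanh, f=lambda z: z):
    """Compute f(W2 @ g(W1 @ x + b1) + b2) for one input vector x.

    g and f are the activation functions; W1, W2 the weight matrices;
    b1, b2 the bias vectors. tanh and identity are illustrative choices.
    """
    h = g(W1 @ x + b1)       # hidden layer: nonlinear transform of the input
    return f(W2 @ h + b2)    # output layer

# Example shapes: 3 inputs -> 4 hidden units -> 2 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
y = two_layer_forward(np.ones(3), W1, b1, W2, b2)
print(y.shape)  # (2,)
```

Without the nonlinearity <math>g</math>, the composition of the two layers would collapse into a single affine map, which is why the hidden layer is what lets the network capture relationships a single perceptron cannot.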

    Revision as of 22:01, 27 April 2026
