Translations:Overfitting and Regularization/15/en

From a Bayesian perspective, {{Term|weight decay|L2 regularization}} is equivalent to placing a Gaussian prior on the weights. It encourages small, distributed weights and discourages any single weight from becoming excessively large.
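A brief sketch of why this equivalence holds (the notation below is illustrative and not taken from the article): assume a Gaussian prior <math>p(w) = \mathcal{N}(0, \sigma^{2} I)</math> on the weights and a likelihood <math>p(\mathcal{D} \mid w)</math> over the data. Maximum a posteriori (MAP) estimation then reduces to minimizing the negative log-likelihood plus an L2 penalty:

<math display="block">
\begin{aligned}
w^{*} &= \arg\max_{w}\; \log p(\mathcal{D} \mid w) + \log p(w) \\
      &= \arg\max_{w}\; \log p(\mathcal{D} \mid w) - \frac{1}{2\sigma^{2}} \lVert w \rVert_{2}^{2} + \text{const} \\
      &= \arg\min_{w}\; \Big[ -\log p(\mathcal{D} \mid w) + \lambda \lVert w \rVert_{2}^{2} \Big],
      \qquad \lambda = \frac{1}{2\sigma^{2}}.
\end{aligned}
</math>

The penalty strength <math>\lambda</math> is inversely proportional to the prior variance, so a tighter prior (smaller <math>\sigma^{2}</math>) penalizes large weights more strongly.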

