Translations:Overfitting and Regularization/23/en

At test time, all neurons are active but their outputs are scaled by <math>(1 - p)</math> to compensate for the larger number of active units (or, equivalently, outputs are scaled by <math>1/(1-p)</math> during training, a variant known as '''inverted {{Term|dropout}}''').
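
The two scaling conventions can be made concrete with a short sketch. The NumPy code below is illustrative only: the function names and the <code>rng</code> parameter are assumptions, not part of this page, and <code>p</code> denotes the drop probability, as in the formula above.

<syntaxhighlight lang="python">
import numpy as np

def dropout_forward(x, p, training, rng=None):
    """Inverted dropout: scale kept activations by 1/(1 - p) during
    training so the test-time forward pass needs no rescaling.
    Here p is the probability of dropping a unit (hypothetical API)."""
    if rng is None:
        rng = np.random.default_rng()
    if training:
        # Keep each unit with probability (1 - p); zero it otherwise.
        mask = (rng.random(x.shape) >= p).astype(x.dtype)
        # Dividing by (1 - p) keeps the expected activation equal to x.
        return x * mask / (1.0 - p)
    # Test time: all units are active and no scaling is needed.
    return x

def dropout_forward_classic(x, p, training, rng=None):
    """Classic dropout: no scaling during training; instead, outputs
    are scaled by the keep probability (1 - p) at test time."""
    if rng is None:
        rng = np.random.default_rng()
    if training:
        mask = (rng.random(x.shape) >= p).astype(x.dtype)
        return x * mask
    # Test time: scale by (1 - p) to compensate for the larger
    # number of active units.
    return x * (1.0 - p)
</syntaxhighlight>

In practice the inverted variant is usually preferred, since the test-time forward pass is then identical to that of a network without dropout.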
