Translations:Overfitting and Regularization/23/en: Difference between revisions
Latest revision as of 23:34, 27 April 2026
At test time, all neurons are active but their outputs are scaled by <math>(1 - p)</math> to compensate for the larger number of active units (or equivalently, outputs are scaled by <math>1/(1-p)</math> during training — '''inverted {{Term|dropout}}''').
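The scaling relationship above can be sketched in a few lines of NumPy. This is a minimal illustration, not a library implementation: the function names and the choice of a Bernoulli mask are assumptions for the example. With inverted dropout, the surviving activations are divided by <math>1-p</math> during training, so the expected activation is unchanged and test-time inference needs no scaling at all.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_train(x, p, rng):
    # Inverted dropout: zero each unit with probability p, then scale
    # the survivors by 1/(1-p) so the expected activation is unchanged.
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def dropout_test(x):
    # At test time all units stay active and no scaling is applied,
    # because the 1/(1-p) factor was already absorbed during training.
    return x

x = np.ones(100_000)
p = 0.5
y = dropout_train(x, p, rng)
# Roughly half the units are zero, the rest equal 2.0,
# so the mean activation stays close to the original mean of 1.0.
print(y.mean())
```

The non-inverted variant mentioned first in the sentence moves the correction to test time instead: training drops units with no rescaling, and inference multiplies all outputs by <math>1-p</math>. Both schemes give the same expected activations; inverted dropout is merely more convenient because the test-time network is left untouched.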