Translations:Overfitting and Regularization/23/en


    At test time, all neurons are active, but their outputs are scaled by $ (1 - p) $, where $ p $ is the dropout probability, to compensate for the larger number of active units (or, equivalently, outputs are scaled by $ 1/(1-p) $ during training — this variant is known as inverted dropout).
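The inverted-dropout variant can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation; the function name and signature are chosen for this example, with $ p $ as the drop probability as above:

```python
import numpy as np

def inverted_dropout(x, p, training=True, rng=None):
    """Inverted dropout: mask units and rescale by 1/(1-p) at train time.

    Because the rescaling happens during training, the forward pass at
    test time needs no adjustment at all.
    """
    if not training or p == 0.0:
        return x  # test time: all units active, no scaling required
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p   # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)       # rescale so the expected activation is unchanged

# The expected value of the output matches the input, so test-time
# behavior is consistent with training-time behavior.
x = np.ones(1000)
out = inverted_dropout(x, p=0.5, rng=np.random.default_rng(0))
```

Note that with the rescaling folded into training, inference code can simply skip the dropout layer, which is why most frameworks implement this variant.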