Translations:Cross-Entropy Loss/6/en

    From Marovi AI
    Revision as of 21:59, 27 April 2026 by FuzzyBot (Importing a new version from external source)

    For a deterministic distribution (a one-hot label), $ H(p) = 0 $. Entropy is maximized when all outcomes are equally likely: for a uniform distribution over $ n $ outcomes, $ H(p) = \log n $.
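These two extremes can be checked numerically. Below is a minimal sketch (the helper name `entropy` is illustrative, not from the original) that computes $ H(p) = -\sum_i p_i \log p_i $ for a one-hot and a uniform distribution, using the convention $ 0 \log 0 = 0 $:

```python
import numpy as np

def entropy(p):
    # H(p) = -sum_i p_i * log(p_i), with the convention 0 * log(0) = 0
    p = np.asarray(p, dtype=float)
    nz = p > 0  # skip zero-probability outcomes to avoid log(0)
    return float(-np.sum(p[nz] * np.log(p[nz])))

one_hot = [0.0, 1.0, 0.0, 0.0]   # deterministic: one outcome is certain
uniform = [0.25, 0.25, 0.25, 0.25]  # all 4 outcomes equally likely

H_onehot = entropy(one_hot)   # 0.0
H_uniform = entropy(uniform)  # log(4) ≈ 1.386, the maximum for 4 outcomes
```

Any non-uniform distribution over the same outcomes, e.g. `[0.7, 0.1, 0.1, 0.1]`, has entropy strictly between these two extremes.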