Translations:Cross-Entropy Loss/1/en

    From Marovi AI
    Revision as of 23:34, 27 April 2026 by FuzzyBot (importing a new version from external source)

    Cross-entropy loss (also called log loss) is the most widely used loss function for classification tasks in machine learning. Rooted in information theory, it measures the dissimilarity between the true label distribution and the model's predicted probability distribution, providing a smooth, differentiable objective that drives probabilistic classifiers toward confident, correct predictions.
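
The idea can be sketched directly from the information-theoretic definition, H(p, q) = −Σᵢ pᵢ log qᵢ, where p is the true label distribution and q is the model's predicted distribution. The following is a minimal illustrative sketch (the function name and the `eps` guard against log(0) are choices for this example, not part of the article):

```python
import math

def cross_entropy(true_dist, pred_dist, eps=1e-12):
    """H(p, q) = -sum_i p_i * log(q_i).

    eps guards against log(0) when a predicted probability is exactly zero.
    """
    return -sum(p * math.log(q + eps) for p, q in zip(true_dist, pred_dist))

# One-hot true label (class 1) against the model's predicted probabilities.
# Only the term for the true class survives, so the loss is -log(q_true).
loss = cross_entropy([0.0, 1.0, 0.0], [0.1, 0.7, 0.2])
```

Because the true distribution is one-hot in standard classification, the loss reduces to the negative log-probability the model assigns to the correct class, which is why confident correct predictions drive the loss toward zero and confident wrong predictions are penalized heavily.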