    Loss Functions
    Latest revision as of 19:43, 27 April 2026

    Topic area: Machine Learning
    Difficulty: Introductory
    Loss functions (also called cost functions or objective functions) quantify how far a model's predictions are from the desired output. Minimising the loss function is the central goal of the training process in machine learning: the optimisation algorithm adjusts the model's parameters to drive the loss as low as possible.

    Purpose

    A loss function maps the model's prediction $ \hat{y} $ and the true target $ y $ to a non-negative real number. Formally, for a single example:

    $ \ell: \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}_{\geq 0} $

    Over a dataset of $ N $ examples, the total loss is typically the average:

    $ L(\theta) = \frac{1}{N}\sum_{i=1}^{N}\ell\bigl(y_i,\, \hat{y}_i(\theta)\bigr) $

    The choice of loss function encodes the problem's structure — what kind of errors matter and how severely they should be penalised. A poorly chosen loss can lead to a model that optimises the wrong objective.
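    The definitions above can be sketched in a few lines of NumPy (the function name and data are made up for illustration): the dataset loss is just the average of a per-example loss.

```python
import numpy as np

def dataset_loss(y, y_hat, per_example_loss):
    """Average a per-example loss l(y_i, y_hat_i) over the dataset."""
    return np.mean([per_example_loss(yi, yhi) for yi, yhi in zip(y, y_hat)])

# Toy data: three targets and three predictions.
y = np.array([1.0, 2.0, 3.0])
y_hat = np.array([1.0, 2.5, 2.0])

# Plug in squared error as the per-example loss.
L = dataset_loss(y, y_hat, lambda yi, yhi: (yi - yhi) ** 2)
```

    Any of the losses discussed below can be substituted for the lambda without changing the averaging step.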

    Mean squared error

    Mean squared error (MSE) is the default loss for regression tasks:

    $ L_{\text{MSE}} = \frac{1}{N}\sum_{i=1}^{N}(y_i - \hat{y}_i)^2 $

    MSE penalises large errors quadratically, making it sensitive to outliers. Its gradient is straightforward:

    $ \frac{\partial}{\partial \hat{y}_i} (y_i - \hat{y}_i)^2 = -2(y_i - \hat{y}_i) $

    A closely related variant is mean absolute error (MAE), $ \frac{1}{N}\sum|y_i - \hat{y}_i| $, which is more robust to outliers but has a non-smooth gradient at zero. The Huber loss combines both: it behaves like MSE for small errors and like MAE for large ones.
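    A minimal NumPy sketch of the three regression losses (the example data, with one deliberately bad prediction, is made up) makes the outlier behaviour concrete:

```python
import numpy as np

def mse(y, y_hat):
    # Quadratic penalty: large residuals dominate.
    return np.mean((y - y_hat) ** 2)

def mae(y, y_hat):
    # Linear penalty: robust to outliers, kink at zero.
    return np.mean(np.abs(y - y_hat))

def huber(y, y_hat, delta=1.0):
    # Quadratic for |residual| <= delta, linear beyond it.
    r = np.abs(y - y_hat)
    quadratic = 0.5 * r ** 2
    linear = delta * (r - 0.5 * delta)
    return np.mean(np.where(r <= delta, quadratic, linear))

y     = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.1, 1.9, 3.2, 10.0])  # last prediction is an outlier
# MSE is dominated by the outlier's squared residual (6^2 = 36),
# while MAE and Huber grow only linearly in that residual.
```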

    Cross-entropy loss

    Cross-entropy loss is the standard choice for classification tasks. It measures the dissimilarity between the predicted probability distribution and the true label distribution.

    Binary cross-entropy

    For binary classification with predicted probability $ p $ and true label $ y \in \{0, 1\} $:

    $ L_{\text{BCE}} = -\frac{1}{N}\sum_{i=1}^{N}\bigl[y_i \log p_i + (1 - y_i)\log(1 - p_i)\bigr] $

    This loss is minimised when the predicted probability matches the true label perfectly ($ p = 1 $ when $ y = 1 $ and $ p = 0 $ when $ y = 0 $).
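    A short NumPy sketch of the binary cross-entropy formula (labels and probabilities are made up; the clipping constant is an implementation detail to keep the logarithm finite):

```python
import numpy as np

def bce(y, p, eps=1e-12):
    # Clip predicted probabilities away from 0 and 1 to avoid log(0).
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([1.0, 0.0, 1.0, 0.0])   # true labels
p = np.array([0.9, 0.1, 0.8, 0.3])   # predicted P(y = 1)
loss = bce(y, p)
```

    Confident correct predictions contribute little to the loss; confident wrong predictions contribute a large penalty through the logarithm.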

    Categorical cross-entropy

    For multi-class classification with $ C $ classes and predicted probability vector $ \hat{\mathbf{y}} $:

    $ L_{\text{CE}} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c} \log \hat{y}_{i,c} $

    When the true labels are one-hot encoded, only the term corresponding to the correct class survives.
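    With one-hot labels the double sum collapses to the log-probability of the correct class, as a small NumPy sketch (made-up softmax outputs) shows:

```python
import numpy as np

def categorical_ce(y_onehot, y_hat, eps=1e-12):
    # With one-hot labels, only the true class's log-probability survives.
    return -np.mean(np.sum(y_onehot * np.log(np.clip(y_hat, eps, 1.0)), axis=1))

# Two examples, three classes; rows of y_hat are softmax outputs.
y_onehot = np.array([[1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0]])
y_hat = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.2, 0.7]])
loss = categorical_ce(y_onehot, y_hat)  # both examples: -log(0.7)
```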

    Hinge loss

    Hinge loss is associated with support vector machines (SVMs) and maximum-margin classifiers. For a binary classification problem with labels $ y \in \{-1, +1\} $ and raw model output $ s $:

    $ L_{\text{hinge}} = \frac{1}{N}\sum_{i=1}^{N}\max(0,\; 1 - y_i \, s_i) $

    The hinge loss is zero when the prediction has the correct sign with margin at least 1, and increases linearly otherwise. Because it is not differentiable at the hinge point, subgradient methods are used for optimisation.
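    The margin behaviour is easy to verify in a NumPy sketch (labels and scores are made up):

```python
import numpy as np

def hinge(y, s):
    # y in {-1, +1}; s is the raw (unthresholded) model score.
    return np.mean(np.maximum(0.0, 1.0 - y * s))

y = np.array([+1.0, -1.0, +1.0])
s = np.array([ 2.0, -0.5,  0.3])
# Margins y*s: 2.0 (zero loss), 0.5 and 0.3 (inside the margin, linear loss).
loss = hinge(y, s)
```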

    Other common loss functions

    Loss          | Formula                                                                                                                                   | Typical use
    Huber         | $ \begin{cases}\tfrac{1}{2}(y-\hat{y})^2 & |y-\hat{y}|\leq\delta \\ \delta(|y-\hat{y}|-\tfrac{\delta}{2}) & \text{otherwise}\end{cases} $ | Robust regression
    KL divergence | $ \sum_c p_c \log\frac{p_c}{q_c} $                                                                                                        | Distribution matching, VAEs
    Focal loss    | $ -\alpha(1-p_t)^\gamma \log p_t $                                                                                                        | Imbalanced classification
    CTC loss      | Dynamic programming over alignments                                                                                                       | Speech recognition, OCR
    Triplet loss  | $ \max(0,\; d(a,p) - d(a,n) + m) $                                                                                                        | Metric learning, face verification
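    As one example from the table, the binary focal loss can be sketched in NumPy (the default $ \alpha = 0.25 $, $ \gamma = 2 $ follow Lin et al. 2017; the data is made up). With $ \gamma = 0 $ and $ \alpha = 1 $ it reduces to plain cross-entropy; larger $ \gamma $ down-weights easy, already-confident examples.

```python
import numpy as np

def focal_loss(y, p, alpha=0.25, gamma=2.0, eps=1e-12):
    # p_t is the probability the model assigns to the true class.
    p_t = np.where(y == 1, p, 1 - p)
    p_t = np.clip(p_t, eps, 1.0)
    # (1 - p_t)^gamma shrinks the contribution of easy examples.
    return np.mean(-alpha * (1 - p_t) ** gamma * np.log(p_t))

y = np.array([1.0])
p = np.array([0.9])  # an easy, confidently-correct example
easy_downweighted = focal_loss(y, p)                      # gamma = 2
plain_ce_scaled   = focal_loss(y, p, alpha=0.25, gamma=0.0)
```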

    Choosing the right loss

    The appropriate loss function depends on the task:

    • Regression — MSE is the default; switch to MAE or Huber if outliers are a concern.
    • Binary classification — binary cross-entropy with sigmoid output.
    • Multi-class classification — categorical cross-entropy with softmax output.
    • Multi-label classification — binary cross-entropy applied independently per label.
    • Ranking or retrieval — contrastive loss, triplet loss, or listwise ranking losses.

    An important consideration is whether the loss is calibrated, i.e. whether minimising it yields well-calibrated predicted probabilities. Cross-entropy is a proper scoring rule, so minimising it encourages calibrated probability estimates; hinge loss is not a proper scoring rule, and its raw margin scores must be post-processed (e.g. with Platt scaling) before they can be interpreted as probabilities.

    Regularisation terms

    In practice, the total objective often includes a regularisation term that penalises model complexity:

    $ J(\theta) = L(\theta) + \lambda \, R(\theta) $

    where $ \lambda $ controls the strength of regularisation. Common choices include L2 regularisation ($ R = \|\theta\|_2^2 $) and L1 regularisation ($ R = \|\theta\|_1 $). See Overfitting and Regularization for more detail.
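    A minimal NumPy sketch of the combined objective (the function name, parameters, and numbers are illustrative):

```python
import numpy as np

def regularised_objective(loss, theta, lam, norm="l2"):
    # J(theta) = L(theta) + lambda * R(theta)
    if norm == "l2":
        R = np.sum(theta ** 2)       # squared L2 norm, shrinks weights smoothly
    else:
        R = np.sum(np.abs(theta))    # L1 norm, encourages sparsity
    return loss + lam * R

theta = np.array([0.5, -2.0, 1.0])
J = regularised_objective(loss=0.8, theta=theta, lam=0.1, norm="l2")
```

    Increasing $ \lambda $ trades data fit for smaller (or sparser) parameters.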

    See also

    • Stochastic Gradient Descent

    References

    • Bishop, C. M. (2006). Pattern Recognition and Machine Learning, Chapter 1. Springer.
    • Goodfellow, I., Bengio, Y. and Courville, A. (2016). Deep Learning, Chapters 6 and 8. MIT Press.
    • Lin, T.-Y. et al. (2017). "Focal Loss for Dense Object Detection". ICCV.
    • Murphy, K. P. (2022). Probabilistic Machine Learning: An Introduction. MIT Press.