Loss Functions

    From Marovi AI
    Revision as of 04:01, 24 April 2026 by DeployBot (talk | contribs) ([deploy-bot] Deploy from CI (775ba6e))
    Topic area Machine Learning
    Difficulty Introductory

    Loss functions (also called cost functions or objective functions) quantify how far a model's predictions are from the desired output. Minimising the loss function is the central goal of the training process in machine learning: the optimisation algorithm adjusts the model's parameters to drive the loss as low as possible.

    Purpose

    A loss function maps the model's prediction $ \hat{y} $ and the true target $ y $ to a non-negative real number. Formally, for a single example:

    $ \ell: \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}_{\geq 0} $

    Over a dataset of $ N $ examples, the total loss is typically the average:

    $ L(\theta) = \frac{1}{N}\sum_{i=1}^{N}\ell\bigl(y_i,\, \hat{y}_i(\theta)\bigr) $

    The choice of loss function encodes the problem's structure — what kind of errors matter and how severely they should be penalised. A poorly chosen loss can lead to a model that optimises the wrong objective.
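    The averaged objective above can be sketched in a few lines of NumPy. This is a minimal illustration, not any particular library's API; the helper `average_loss` and the per-example `loss_fn` argument are names chosen here for clarity:

```python
import numpy as np

def average_loss(loss_fn, y_true, y_pred):
    """Mean of a per-example loss over the dataset (the L(theta) above)."""
    per_example = loss_fn(y_true, y_pred)  # shape (N,)
    return per_example.mean()

# Example with squared error as the per-example loss
y = np.array([1.0, 2.0, 3.0])
y_hat = np.array([1.5, 2.0, 2.0])
squared_error = lambda y, p: (y - p) ** 2
avg = average_loss(squared_error, y, y_hat)  # (0.25 + 0 + 1) / 3
```

    Swapping in a different `loss_fn` changes the objective without touching the averaging machinery, which is exactly how most frameworks separate the two concerns.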

    Mean squared error

    Mean squared error (MSE) is the default loss for regression tasks:

    $ L_{\text{MSE}} = \frac{1}{N}\sum_{i=1}^{N}(y_i - \hat{y}_i)^2 $

    MSE penalises large errors quadratically, making it sensitive to outliers. Its gradient is straightforward:

    $ \frac{\partial}{\partial \hat{y}_i} (y_i - \hat{y}_i)^2 = -2(y_i - \hat{y}_i) $

    A closely related variant is mean absolute error (MAE), $ \frac{1}{N}\sum|y_i - \hat{y}_i| $, which is more robust to outliers but is not differentiable at zero. The Huber loss combines both: it behaves like MSE for small errors and like MAE for large ones.
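    The three regression losses can be compared side by side in a short NumPy sketch (function names here are illustrative, not a library API):

```python
import numpy as np

def mse(y, y_hat):
    """Mean squared error: quadratic penalty, sensitive to outliers."""
    return np.mean((y - y_hat) ** 2)

def mae(y, y_hat):
    """Mean absolute error: linear penalty, robust to outliers."""
    return np.mean(np.abs(y - y_hat))

def huber(y, y_hat, delta=1.0):
    """Huber loss: quadratic inside |r| <= delta, linear outside."""
    r = np.abs(y - y_hat)
    quad = 0.5 * r ** 2                  # MSE-like branch for small errors
    lin = delta * (r - 0.5 * delta)      # MAE-like branch for large errors
    return np.mean(np.where(r <= delta, quad, lin))
```

    On a residual vector containing one outlier, `mse` blows up quadratically while `mae` and `huber` grow only linearly in the outlying term, which is the robustness trade-off described above.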

    Cross-entropy loss

    Cross-entropy loss is the standard choice for classification tasks. It measures the dissimilarity between the predicted probability distribution and the true label distribution.

    Binary cross-entropy

    For binary classification with predicted probability $ p $ and true label $ y \in \{0, 1\} $:

    $ L_{\text{BCE}} = -\frac{1}{N}\sum_{i=1}^{N}\bigl[y_i \log p_i + (1 - y_i)\log(1 - p_i)\bigr] $

    This loss is minimised when the predicted probability matches the true label perfectly ($ p = 1 $ when $ y = 1 $ and $ p = 0 $ when $ y = 0 $).
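    A direct NumPy transcription of the formula follows; clipping the probabilities away from 0 and 1 is a standard numerical-stability precaution (otherwise a saturated prediction would produce $ \log 0 $):

```python
import numpy as np

def binary_cross_entropy(y, p, eps=1e-12):
    """BCE over labels y in {0, 1} and predicted probabilities p."""
    p = np.clip(p, eps, 1 - eps)  # avoid log(0) for saturated predictions
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

bce = binary_cross_entropy(np.array([1.0, 0.0]), np.array([0.9, 0.1]))
```

    In practice, frameworks usually fuse the sigmoid and the log into one numerically stable operation rather than clipping, but the clipped form shown here matches the formula directly.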

    Categorical cross-entropy

    For multi-class classification with $ C $ classes and predicted probability vector $ \hat{\mathbf{y}} $:

    $ L_{\text{CE}} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c} \log \hat{y}_{i,c} $

    When the true labels are one-hot encoded, only the term corresponding to the correct class survives.
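    The one-hot observation above is visible in code: multiplying by the one-hot label zeroes out every class except the correct one. A minimal sketch:

```python
import numpy as np

def categorical_cross_entropy(y_onehot, y_prob, eps=1e-12):
    """CE over one-hot labels (N, C) and predicted probabilities (N, C)."""
    y_prob = np.clip(y_prob, eps, 1.0)  # guard against log(0)
    # One-hot labels zero out every term except the true class
    return -np.mean(np.sum(y_onehot * np.log(y_prob), axis=1))

ce = categorical_cross_entropy(np.array([[1.0, 0.0, 0.0]]),
                               np.array([[0.7, 0.2, 0.1]]))
```

    For one-hot targets this reduces to $ -\log \hat{y}_{i,\text{true}} $ per example, which is why many implementations accept integer class indices instead of full one-hot vectors.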

    Hinge loss

    Hinge loss is associated with support vector machines (SVMs) and maximum-margin classifiers. For a binary classification problem with labels $ y \in \{-1, +1\} $ and raw model output $ s $:

    $ L_{\text{hinge}} = \frac{1}{N}\sum_{i=1}^{N}\max(0,\; 1 - y_i \, s_i) $

    The hinge loss is zero when the prediction has the correct sign with margin at least 1, and increases linearly otherwise. Because it is not differentiable at the hinge point, subgradient methods are used for optimisation.
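    The hinge behaviour, zero beyond the margin and linear inside it, is one line of NumPy:

```python
import numpy as np

def hinge_loss(y, s):
    """Hinge loss: y in {-1, +1}, s is the raw (unthresholded) model score."""
    # Zero when y*s >= 1 (correct with margin), linear in the violation otherwise
    return np.mean(np.maximum(0.0, 1.0 - y * s))

h = hinge_loss(np.array([1.0, -1.0]), np.array([2.0, 0.5]))
```

    Here the first example ($ y s = 2 \geq 1 $) contributes nothing, while the second ($ y s = -0.5 $) contributes its margin violation of 1.5.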

    Other common loss functions

    • Huber: $ \begin{cases}\tfrac{1}{2}(y-\hat{y})^2 & |y-\hat{y}|\leq\delta \\ \delta(|y-\hat{y}|-\tfrac{\delta}{2}) & \text{otherwise}\end{cases} $. Typical use: robust regression.
    • KL divergence: $ \sum_c p_c \log\frac{p_c}{q_c} $. Typical use: distribution matching, VAEs.
    • Focal loss: $ -\alpha(1-p_t)^\gamma \log p_t $. Typical use: imbalanced classification.
    • CTC loss: computed by dynamic programming over alignments. Typical use: speech recognition, OCR.
    • Triplet loss: $ \max(0,\; d(a,p) - d(a,n) + m) $. Typical use: metric learning, face verification.
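    As one concrete example from this list, the binary focal loss can be sketched in NumPy. This follows the formula above with the usual $ p_t $ convention ($ p_t = p $ when $ y = 1 $, else $ 1 - p $); the default values of $ \alpha $ and $ \gamma $ below are the commonly cited ones, not the only valid choices:

```python
import numpy as np

def focal_loss(y, p, alpha=0.25, gamma=2.0, eps=1e-12):
    """Binary focal loss: down-weights already well-classified examples."""
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)   # probability assigned to the true class
    # (1 - p_t)^gamma shrinks the contribution of confident, correct predictions
    return np.mean(-alpha * (1 - p_t) ** gamma * np.log(p_t))

fl = focal_loss(np.array([1]), np.array([0.9]))
```

    With $ \gamma = 0 $ and $ \alpha = 1 $ this reduces to ordinary binary cross-entropy; increasing $ \gamma $ shifts the training signal toward hard, misclassified examples, which is why it helps under class imbalance.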

    Choosing the right loss

    The appropriate loss function depends on the task:

    • Regression — MSE is the default; switch to MAE or Huber if outliers are a concern.
    • Binary classification — binary cross-entropy with sigmoid output.
    • Multi-class classification — categorical cross-entropy with softmax output.
    • Multi-label classification — binary cross-entropy applied independently per label.
    • Ranking or retrieval — contrastive loss, triplet loss, or listwise ranking losses.

    An important consideration is whether the loss is calibrated, i.e. whether minimising it yields well-calibrated predicted probabilities. Cross-entropy is a proper scoring rule, so its minimiser recovers the true conditional class probabilities; hinge loss is not, and raw SVM scores must be converted via post-hoc calibration (e.g. Platt scaling) before they can be read as probabilities.

    Regularisation terms

    In practice, the total objective often includes a regularisation term that penalises model complexity:

    $ J(\theta) = L(\theta) + \lambda \, R(\theta) $

    where $ \lambda $ controls the strength of regularisation. Common choices include L2 regularisation ($ R = \|\theta\|_2^2 $) and L1 regularisation ($ R = \|\theta\|_1 $). See Overfitting and Regularization for more detail.
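    The combined objective $ J(\theta) $ is just the data loss plus a scaled penalty; a minimal sketch (the function and parameter names are illustrative):

```python
import numpy as np

def regularised_objective(data_loss, theta, lam=0.01, kind="l2"):
    """J(theta) = L(theta) + lambda * R(theta) for L1 or L2 penalties."""
    if kind == "l2":
        penalty = np.sum(theta ** 2)       # R = ||theta||_2^2
    else:
        penalty = np.sum(np.abs(theta))    # R = ||theta||_1
    return data_loss + lam * penalty

theta = np.array([3.0, 4.0])
j_l2 = regularised_objective(1.0, theta, lam=0.1, kind="l2")  # 1 + 0.1 * 25
j_l1 = regularised_objective(1.0, theta, lam=0.1, kind="l1")  # 1 + 0.1 * 7
```

    Because the penalty depends only on $ \theta $, its gradient simply adds to the data-loss gradient during optimisation ($ 2\lambda\theta $ for L2), which is why L2 regularisation is often implemented as "weight decay".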

    References

    • Bishop, C. M. (2006). Pattern Recognition and Machine Learning, Chapter 1. Springer.
    • Goodfellow, I., Bengio, Y. and Courville, A. (2016). Deep Learning, Chapters 6 and 8. MIT Press.
    • Lin, T.-Y. et al. (2017). "Focal Loss for Dense Object Detection". ICCV.
    • Murphy, K. P. (2022). Probabilistic Machine Learning: An Introduction. MIT Press.