Softmax Function
| Article | |
|---|---|
| Topic area | Machine Learning |
| Difficulty | Introductory |
The softmax function (also called the normalized exponential function) is a mathematical function that converts a vector of real numbers (logits) into a probability distribution. It is the standard output activation for multi-class classification in neural networks and plays a central role in models ranging from logistic regression to large language models.
Definition
Given a vector of logits $ \mathbf{z} = (z_1, z_2, \dots, z_K) $ for $ K $ classes, the softmax function produces:
- $ \sigma(\mathbf{z})_k = \frac{e^{z_k}}{\sum_{j=1}^{K} e^{z_j}}, \qquad k = 1, \dots, K $
The output satisfies two properties that make it a valid probability distribution:
- $ \sigma(\mathbf{z})_k > 0 $ for all $ k $ (since the exponential is always positive).
- $ \sum_{k=1}^{K} \sigma(\mathbf{z})_k = 1 $ (by construction).
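The definition translates directly into code. The following is a minimal NumPy sketch (the helper name `softmax` and the example logits are illustrative, not taken from the article):

```python
import numpy as np

def softmax(z):
    """Exponentiate each logit, then normalize so the outputs sum to 1."""
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

probs = softmax(np.array([1.0, 2.0, 3.0]))
print(probs)        # [0.09003057 0.24472847 0.66524096]
print(probs.sum())  # 1.0 (up to floating-point rounding), every entry strictly positive
```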
Intuition
The softmax function amplifies differences between logits. A logit that is larger than its peers receives a disproportionately large share of the probability mass because the exponential function grows super-linearly. For example:
| Logits | Softmax output |
|---|---|
| $ (2.0,\; 1.0,\; 0.1) $ | $ (0.659,\; 0.242,\; 0.099) $ |
| $ (5.0,\; 1.0,\; 0.1) $ | $ (0.975,\; 0.018,\; 0.007) $ |
As the gap between the largest logit and the others increases, the output approaches a one-hot vector. This "winner-take-most" behavior makes softmax well-suited for classification where a single class should dominate.
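The table values can be reproduced directly from the definition (again an illustrative NumPy snippet):

```python
import numpy as np

def softmax(z):
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

print(np.round(softmax(np.array([2.0, 1.0, 0.1])), 3))  # [0.659 0.242 0.099]
print(np.round(softmax(np.array([5.0, 1.0, 0.1])), 3))  # [0.975 0.018 0.007]
```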
Temperature Parameter
A temperature parameter $ T > 0 $ controls the sharpness of the distribution:
- $ \sigma(\mathbf{z}; T)_k = \frac{e^{z_k / T}}{\sum_{j=1}^{K} e^{z_j / T}} $
- $ T \to 0 $: The distribution collapses to a one-hot vector selecting the argmax — equivalent to a hard decision.
- $ T = 1 $: Standard softmax.
- $ T \to \infty $: The distribution approaches uniform — all classes become equally likely.
Temperature scaling is widely used in knowledge distillation (Hinton et al., 2015), where a "soft" distribution from a teacher model provides richer training signal than hard labels. It is also used to control randomness in text generation from language models.
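The three regimes can be seen by scaling a small logit vector (an illustrative NumPy sketch; the name `softmax_t` and the example values are not from the article):

```python
import numpy as np

def softmax_t(z, T=1.0):
    """Temperature-scaled softmax: divide the logits by T before normalizing."""
    z = np.asarray(z, dtype=float) / T
    exp_z = np.exp(z - z.max())      # max-subtraction for stability (see next section)
    return exp_z / exp_z.sum()

logits = [2.0, 1.0, 0.1]
print(np.round(softmax_t(logits, T=0.1), 3))   # [1. 0. 0.]           -> near one-hot
print(np.round(softmax_t(logits, T=1.0), 3))   # [0.659 0.242 0.099]  -> standard softmax
print(np.round(softmax_t(logits, T=10.0), 3))  # [0.366 0.331 0.303]  -> near uniform
```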
Numerical Stability
A naive implementation of softmax can overflow when logits are large (e.g., $ e^{1000} $ is infinite in floating point). The standard fix subtracts the maximum logit:
- $ \sigma(\mathbf{z})_k = \frac{e^{z_k - m}}{\sum_{j=1}^{K} e^{z_j - m}}, \qquad m = \max_j z_j $
This is mathematically equivalent (the constant cancels) but ensures the largest exponent is $ e^0 = 1 $, preventing overflow. All major deep learning frameworks implement this stabilized version automatically.
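A sketch of the stabilized computation (illustrative NumPy; frameworks perform the equivalent shift internally):

```python
import numpy as np

def stable_softmax(z):
    """Subtract the maximum logit so the largest exponent is exp(0) = 1."""
    z = np.asarray(z, dtype=float)
    exp_z = np.exp(z - z.max())
    return exp_z / exp_z.sum()

z = np.array([1000.0, 999.0, 0.0])
# A naive np.exp(z) overflows to inf here; the shifted version stays finite.
print(stable_softmax(z))   # ~[0.731, 0.269, 0.0]
```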
Relationship to Sigmoid
For the special case of $ K = 2 $ classes, the softmax function reduces to the sigmoid (logistic) function. If we define $ z = z_1 - z_2 $, then:
- $ \sigma(\mathbf{z})_1 = \frac{e^{z_1}}{e^{z_1} + e^{z_2}} = \frac{1}{1 + e^{-z}} = \sigma_{\mathrm{sigmoid}}(z) $
This is why binary classifiers typically use a single output neuron with a sigmoid activation rather than two neurons with softmax — they are mathematically equivalent.
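The equivalence is easy to verify numerically (the logit values here are illustrative):

```python
import numpy as np

z1, z2 = 1.3, -0.4
z = z1 - z2

two_class_softmax = np.exp(z1) / (np.exp(z1) + np.exp(z2))
sigmoid = 1.0 / (1.0 + np.exp(-z))

print(two_class_softmax, sigmoid)   # both ~0.8455 -- identical up to rounding
```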
Gradient
The Jacobian of the softmax function with respect to its input is:
- $ \frac{\partial \sigma_k}{\partial z_j} = \sigma_k (\delta_{kj} - \sigma_j) $
where $ \delta_{kj} $ is the Kronecker delta. When combined with Cross-Entropy Loss, the gradient simplifies to $ \hat{y}_k - y_k $, which is computationally efficient and numerically stable.
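A small numerical illustration of the Jacobian and of the simplified combined gradient (a NumPy sketch with an assumed true class of 0):

```python
import numpy as np

def softmax(z):
    exp_z = np.exp(z - z.max())
    return exp_z / exp_z.sum()

z = np.array([2.0, 1.0, 0.1])
s = softmax(z)

# Jacobian: J[k, j] = s_k * (delta_kj - s_j)
J = np.diag(s) - np.outer(s, s)

# Cross-entropy loss L = -log s_c with one-hot target y and true class c = 0
y = np.array([1.0, 0.0, 0.0])
print(np.round(J @ (-y / s), 3))   # chain rule through the Jacobian: [-0.341  0.242  0.099]
print(np.round(s - y, 3))          # simplified form y_hat - y: the same vector
```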
Use in Classification
In a typical classification pipeline:
- A neural network produces raw logits $ \mathbf{z} $ from its final linear layer.
- Softmax converts logits to probabilities: $ \hat{\mathbf{y}} = \sigma(\mathbf{z}) $.
- The predicted class is $ \hat{c} = \arg\max_k \hat{y}_k $.
- Training uses Cross-Entropy Loss applied to the predicted distribution and the true labels.
In practice, the softmax and cross-entropy are computed jointly for numerical stability (the log-softmax formulation), and the argmax at inference time can be applied directly to the logits without computing softmax at all.
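A compact NumPy sketch of this pipeline, with the log-softmax formulation written out explicitly (the layer sizes, random weights, and true class are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. A single linear layer standing in for the network's final layer (4 features, 3 classes)
W, b = rng.normal(size=(4, 3)), np.zeros(3)
x = rng.normal(size=4)
z = x @ W + b                       # raw logits

# 2.-3. Probabilities (via log-softmax) and predicted class
def log_softmax(z):
    z = z - z.max()                 # stabilization, as in the section above
    return z - np.log(np.exp(z).sum())

log_probs = log_softmax(z)
pred = int(np.argmax(z))            # argmax of the logits == argmax of the softmax

# 4. Cross-entropy loss for the (assumed) true class
true_class = 1
loss = -log_probs[true_class]
print(pred, round(float(loss), 3))
```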
Beyond Classification
Softmax appears in many contexts beyond the output layer:
- Attention mechanisms: Softmax normalizes alignment scores into attention weights in the Transformer architecture (a small sketch follows this list).
- Reinforcement learning: Softmax over action-value estimates produces a stochastic policy (Boltzmann exploration).
- Mixture models: Softmax parameterizes mixing coefficients in mixture-of-experts architectures.
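As a concrete instance of the first item, here is a minimal sketch of softmax inside scaled dot-product attention (single head, random data; the shapes and names are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, with a row-wise (per-query) softmax."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # alignment scores
    scores -= scores.max(axis=-1, keepdims=True)     # stabilization
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # attention weights
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 8)) for _ in range(3))
out, attn = scaled_dot_product_attention(Q, K, V)
print(out.shape, attn.sum(axis=-1))   # (5, 8); every row of weights sums to 1
```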
References
- Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer, Section 4.3.4.
- Goodfellow, I., Bengio, Y. and Courville, A. (2016). Deep Learning. MIT Press, Section 6.2.2.3.
- Hinton, G., Vinyals, O. and Dean, J. (2015). "Distilling the Knowledge in a Neural Network". arXiv:1503.02531.
- Bridle, J. S. (1990). "Probabilistic Interpretation of Feedforward Classification Network Outputs". Neurocomputing.