Translations:Softmax Function/34/en: Difference between revisions
Revision as of 19:42, 27 April 2026
- Attention mechanisms: Softmax normalizes alignment scores into attention weights in the Transformer architecture.
- Reinforcement learning: Softmax over action-value estimates produces a stochastic policy (Boltzmann exploration).
- Mixture models: Softmax parameterizes mixing coefficients in mixture-of-experts architectures.
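The uses above share one operation: turning a vector of real-valued scores into a probability distribution. A minimal sketch in Python, using the Boltzmann-exploration case as the example (the `q_values` and the temperature value are illustrative, not from the source):

```python
import math

def softmax(scores, temperature=1.0):
    # Numerically stable softmax: subtract the max score before
    # exponentiating so large scores cannot overflow math.exp.
    m = max(scores)
    exps = [math.exp((s - m) / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Boltzmann exploration: map action-value estimates to a stochastic
# policy; higher-valued actions get higher selection probability.
q_values = [1.0, 2.0, 0.5]  # hypothetical Q-estimates
policy = softmax(q_values)
```

Lowering the temperature sharpens the distribution toward the greedy action; raising it approaches a uniform policy. The same function applied to alignment scores yields attention weights, and applied to gating scores yields mixture coefficients.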