- Masking: In autoregressive decoding, future positions are masked (set to $-\infty$ before the softmax) so that each position can only attend to earlier positions, preserving the causal structure (see the first sketch after this list).
- Attention dropout: Randomly dropping attention weights during training acts as a regulariser and reduces overfitting to specific alignment patterns (also illustrated in the first sketch below).
- Key-value caching: During inference, previously computed key and value vectors are cached to avoid redundant computation, which significantly speeds up autoregressive generation (see the second sketch below).
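
A minimal NumPy sketch of the first two points, assuming single-head scaled dot-product attention with queries, keys, and values of shape (seq_len, d_k); the function names (`causal_attention`, `softmax`) and the inverted-dropout scaling are illustrative choices, not a specific library API:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax; -inf entries get exactly zero weight.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def causal_attention(Q, K, V, dropout_p=0.0, training=True, rng=None):
    """Scaled dot-product attention with a causal mask and optional attention dropout."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (seq_len, seq_len)

    # Causal mask: position i may only attend to positions j <= i.
    # Future positions are set to -inf so softmax assigns them zero weight.
    seq_len = scores.shape[0]
    future = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores = np.where(future, -np.inf, scores)

    weights = softmax(scores, axis=-1)

    # Attention dropout: randomly zero attention weights during training,
    # rescaling the survivors (inverted dropout) so expectations match at inference.
    if training and dropout_p > 0.0:
        rng = rng or np.random.default_rng()
        keep = rng.random(weights.shape) >= dropout_p
        weights = weights * keep / (1.0 - dropout_p)

    return weights @ V
```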
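
A similarly simplified sketch of key-value caching for one autoregressive decoding step, again in NumPy; `decode_step`, the cache dictionary layout, and the projection matrices `W_q`, `W_k`, `W_v` are hypothetical names for illustration. Note that no causal mask is needed in the cached step, because the cache only ever contains past positions.

```python
import numpy as np

def decode_step(x_t, W_q, W_k, W_v, cache):
    """One decoding step: project the current token, extend the key-value
    cache, and attend over the cached prefix instead of recomputing it."""
    q = x_t @ W_q                                      # (d_k,)
    k = x_t @ W_k                                      # (d_k,)
    v = x_t @ W_v                                      # (d_v,)

    # Append the new key/value; previous steps are reused from the cache.
    cache["K"] = k[None, :] if cache["K"] is None else np.vstack([cache["K"], k])
    cache["V"] = v[None, :] if cache["V"] is None else np.vstack([cache["V"], v])

    scores = cache["K"] @ q / np.sqrt(q.shape[-1])     # (t+1,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ cache["V"], cache

# Toy usage with random projections and token embeddings.
d_model, d_k = 16, 8
rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
cache = {"K": None, "V": None}
for x_t in rng.normal(size=(5, d_model)):
    out, cache = decode_step(x_t, W_q, W_k, W_v, cache)
```

Without the cache, each step would recompute keys and values for the entire prefix, making generation quadratic in work per token rather than linear.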