- Masking: In autoregressive decoding, scores for future positions are set to $ -\infty $ before the softmax, so each position attends only to itself and earlier positions, preserving the causal structure (see the first sketch after this list).
- Attention dropout: Randomly dropping attention weights during training acts as a regulariser and reduces overfitting to specific alignment patterns (see the second sketch below).
- Key-value caching: During inference, previously computed key and value vectors are cached to avoid redundant computation, significantly speeding up autoregressive generation (see the third sketch below).
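
Below is a minimal sketch of causal masking in scaled dot-product attention, assuming PyTorch and single-head tensors of shape `(batch, seq_len, d_k)`; the function name `causal_attention` is illustrative, not from a specific library.

```python
import torch
import torch.nn.functional as F

def causal_attention(q, k, v):
    """Scaled dot-product attention with a causal (look-ahead) mask."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # (batch, seq, seq)
    seq_len = q.size(-2)
    # True above the diagonal: position i must not see positions j > i.
    mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool,
                                 device=q.device), diagonal=1)
    # Setting masked scores to -inf makes their softmax weight exactly 0.
    scores = scores.masked_fill(mask, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    return weights @ v
```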
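
Attention dropout can be sketched the same way, again assuming PyTorch. Note that it is applied to the softmax output rather than to the raw scores, so individual alignment weights are zeroed (and the survivors rescaled by $ 1/(1-p) $, as in standard inverted dropout):

```python
import torch
import torch.nn.functional as F

def attention_with_dropout(q, k, v, p=0.1, training=True):
    """Attention whose softmax weights are randomly dropped during training."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    weights = F.softmax(scores, dim=-1)
    # Each alignment weight is independently zeroed with probability p
    # (training mode only); at inference, dropout is a no-op.
    weights = F.dropout(weights, p=p, training=training)
    return weights @ v
```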
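
Finally, a sketch of key-value caching for a single decoding step, assuming PyTorch; `decode_step` and the `(keys, values)` cache tuple are illustrative names, not a particular library's API:

```python
import torch
import torch.nn.functional as F

def decode_step(q_t, k_t, v_t, cache=None):
    """One autoregressive step; q_t, k_t, v_t have shape (batch, 1, d_k).

    cache is a (keys, values) tuple accumulated over all previous steps,
    so past key/value projections are never recomputed.
    """
    if cache is not None:
        k_t = torch.cat([cache[0], k_t], dim=1)  # (batch, t, d_k)
        v_t = torch.cat([cache[1], v_t], dim=1)
    d_k = q_t.size(-1)
    scores = q_t @ k_t.transpose(-2, -1) / d_k ** 0.5  # (batch, 1, t)
    # No causal mask is needed here: the cache contains only past positions.
    weights = F.softmax(scores, dim=-1)
    return weights @ v_t, (k_t, v_t)
```

Each call returns the updated cache, so generation reuses all earlier keys and values: step $ t $ computes attention over $ t $ cached positions instead of re-encoding the whole prefix.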