Cross-attention is used when queries come from one sequence and keys/values come from another. In encoder-decoder Transformers, the decoder attends to encoder outputs via cross-attention, enabling the model to condition its generation on the full input context.
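To make the query/key/value split concrete, here is a minimal single-head sketch in NumPy. The function name `cross_attention`, the random projection matrices, and all shapes are illustrative assumptions, not part of any particular model; a real Transformer would learn the projections and use multiple heads.

<syntaxhighlight lang="python">
import numpy as np

def cross_attention(decoder_states, encoder_outputs, d_k):
    """Single-head cross-attention sketch: queries come from the
    decoder sequence, keys/values from the encoder outputs."""
    rng = np.random.default_rng(0)
    # Hypothetical fixed projections; learned parameters in practice.
    W_q = rng.standard_normal((decoder_states.shape[-1], d_k))
    W_k = rng.standard_normal((encoder_outputs.shape[-1], d_k))
    W_v = rng.standard_normal((encoder_outputs.shape[-1], d_k))

    Q = decoder_states @ W_q    # (T_dec, d_k): queries from one sequence
    K = encoder_outputs @ W_k   # (T_enc, d_k): keys from the other
    V = encoder_outputs @ W_v   # (T_enc, d_k): values from the other

    # Scaled dot-product scores: each decoder position vs. every encoder position.
    scores = (Q @ K.T) / np.sqrt(d_k)                      # (T_dec, T_enc)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax over encoder positions
    return weights @ V          # each decoder position mixes encoder values

# Toy usage: 4 decoder steps attend over 6 encoder positions.
dec = np.random.default_rng(1).standard_normal((4, 32))
enc = np.random.default_rng(2).standard_normal((6, 32))
out = cross_attention(dec, enc, d_k=16)
print(out.shape)  # (4, 16)
</syntaxhighlight>

Note that the attention weights form a (T_dec, T_enc) matrix: every decoder position can look at the entire encoder sequence, which is what lets generation condition on the full input context.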