Translations:Attention Mechanisms/31/en

    '''Cross-attention''' is used when queries come from one sequence and keys/values come from another. In encoder-decoder {{Term|transformer|Transformers}}, the decoder attends to encoder outputs via cross-attention, enabling the model to condition its generation on the full input context.

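A minimal sketch of single-head cross-attention, written in NumPy for illustration (the shapes, projection matrices, and variable names here are assumptions, not taken from the article): queries are projected from the decoder states, while keys and values are projected from the encoder outputs, so each decoder position attends over the full input context.

<syntaxhighlight lang="python">
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(decoder_states, encoder_outputs, W_q, W_k, W_v):
    """Single-head cross-attention: queries come from the decoder,
    keys/values come from the encoder outputs (illustrative sketch)."""
    Q = decoder_states @ W_q       # (T_dec, d_k)
    K = encoder_outputs @ W_k      # (T_enc, d_k)
    V = encoder_outputs @ W_v      # (T_enc, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (T_dec, T_enc)
    weights = softmax(scores, axis=-1)  # each decoder position attends over encoder positions
    return weights @ V                  # (T_dec, d_v)

# Hypothetical shapes: 5 decoder positions, 7 encoder positions, model dim 16.
rng = np.random.default_rng(0)
dec = rng.normal(size=(5, 16))
enc = rng.normal(size=(7, 16))
W_q, W_k, W_v = (rng.normal(size=(16, 16)) for _ in range(3))
print(cross_attention(dec, enc, W_q, W_k, W_v).shape)  # (5, 16)
</syntaxhighlight>

In an encoder-decoder Transformer this block sits inside each decoder layer, after the decoder's masked self-attention: the decoder states supply the queries, and the same encoder outputs serve as keys and values at every decoding step.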