Translations:Attention Mechanisms/27/en
Because self-attention is permutation-invariant (it treats the input as an unordered set), positional information must be injected explicitly. The original {{Term|transformer}} uses sinusoidal encodings:
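As defined in the original Transformer paper (Vaswani et al., 2017), each position <math>pos</math> and each pair of embedding dimensions <math>2i</math>, <math>2i+1</math> receive:

<math>PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right), \qquad PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right)</math>

where <math>d_{\text{model}}</math> is the embedding dimension, so even dimensions carry sines and odd dimensions carry cosines of geometrically spaced wavelengths.

Below is a minimal NumPy sketch of this scheme (the function name <code>sinusoidal_encoding</code> is illustrative, not from the source, and it assumes an even <math>d_{\text{model}}</math>):

<syntaxhighlight lang="python">
import numpy as np

def sinusoidal_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of sinusoidal position encodings.

    Assumes d_model is even so sine/cosine columns pair up exactly.
    """
    positions = np.arange(seq_len)[:, np.newaxis]           # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]          # (1, d_model // 2)
    angles = positions / np.power(10000.0, dims / d_model)  # (seq_len, d_model // 2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)  # odd dimensions: cosine
    return pe

# The resulting rows are added directly to the token embeddings,
# injecting the positional information that self-attention otherwise lacks.
</syntaxhighlight>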