Translations:Attention Mechanisms/27/en: Difference between revisions
Latest revision as of 23:33, 27 April 2026
Because self-attention is permutation-invariant (it treats the input as an unordered set), positional information must be injected explicitly. The original transformer uses sinusoidal encodings:
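The formulas introduced in the original transformer paper (Vaswani et al., 2017) are PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)), so even dimensions carry sines and odd dimensions carry cosines at geometrically spaced wavelengths. A minimal NumPy sketch (the function name and shapes are illustrative, not from the original text):

```python
import numpy as np

def sinusoidal_encoding(num_positions: int, d_model: int) -> np.ndarray:
    """Return a (num_positions, d_model) matrix of sinusoidal position encodings.

    Even columns hold sin(pos / 10000^(2i/d_model)),
    odd columns hold cos(pos / 10000^(2i/d_model)).
    """
    positions = np.arange(num_positions)[:, np.newaxis]      # shape (P, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]           # shape (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)   # shape (P, d_model/2)

    pe = np.zeros((num_positions, d_model))
    pe[:, 0::2] = np.sin(angles)  # even indices: sine
    pe[:, 1::2] = np.cos(angles)  # odd indices: cosine
    return pe
```

Because each dimension is a fixed-frequency sinusoid, the encoding for position pos + k is a linear function of the encoding for pos, which lets the model attend by relative offset.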