Because self-attention is permutation-invariant (it treats the input as an unordered set), positional information must be injected explicitly. The original transformer uses sinusoidal encodings:
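For reference, the standard formulation from "Attention Is All You Need" (Vaswani et al., 2017) assigns a sine to even embedding dimensions and a cosine to odd ones, with wavelengths forming a geometric progression:

$$
PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right), \qquad
PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right)
$$

A minimal sketch of how these encodings can be computed (the function name and use of NumPy are illustrative assumptions, not part of the original text; it assumes an even `d_model`):

```python
import numpy as np

def sinusoidal_encoding(seq_len: int, d_model: int, base: float = 10000.0) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of sinusoidal position encodings.

    Assumes d_model is even, so sine and cosine channels pair up exactly.
    """
    positions = np.arange(seq_len)[:, None]               # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]               # even dimension indices 0, 2, 4, ...
    angles = positions / np.power(base, dims / d_model)    # (seq_len, d_model // 2)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)   # odd dimensions: cosine
    return pe

# Example: encodings for a sequence of length 50 with 128-dimensional embeddings,
# added elementwise to the token embeddings before the first attention layer.
pe = sinusoidal_encoding(seq_len=50, d_model=128)
```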