Translations:Attention Is All You Need/16/en

    From Marovi AI

    The encoder consists of six identical layers, each containing a multi-head self-attention sublayer followed by a position-wise feed-forward network, with a residual connection around each sublayer followed by layer normalization (i.e., the output of each sublayer is LayerNorm(x + Sublayer(x))). The decoder is likewise a stack of six identical layers, but inserts a third sublayer that performs multi-head attention over the encoder output, and masks future positions in its self-attention sublayer to preserve the autoregressive property.
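    The layer structure described above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the learned projection matrices inside multi-head attention are omitted (each head simply attends over its slice of the model dimension), and all six layers share one set of feed-forward weights for brevity. The `causal_mask` helper shows how the decoder blocks attention to future positions.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # normalize over the model dimension (residual form: LayerNorm(x + Sublayer(x)))
    mu = x.mean(-1, keepdims=True)
    sigma = x.std(-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def attention(q, k, v, mask=None):
    # scaled dot-product attention; mask is True where attending is allowed
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
    if mask is not None:
        scores = np.where(mask, scores, -1e9)
    return softmax(scores) @ v

def causal_mask(n):
    # lower-triangular: position i may attend only to positions <= i
    return np.tril(np.ones((n, n), dtype=bool))

def multi_head_self_attention(x, h=8, mask=None):
    # toy version: split the model dim into h heads, no learned projections
    heads = np.split(x, h, axis=-1)
    out = [attention(s, s, s, mask) for s in heads]  # self-attention: Q = K = V
    return np.concatenate(out, axis=-1)

def feed_forward(x, W1, b1, W2, b2):
    # position-wise two-layer network with ReLU, applied at every position
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

def encoder_layer(x, W1, b1, W2, b2, h=8):
    # sublayer 1: multi-head self-attention, residual + layer norm
    x = layer_norm(x + multi_head_self_attention(x, h))
    # sublayer 2: position-wise feed-forward, residual + layer norm
    return layer_norm(x + feed_forward(x, W1, b1, W2, b2))

rng = np.random.default_rng(0)
seq, d_model, d_ff = 10, 64, 256
x = rng.standard_normal((seq, d_model))
W1 = rng.standard_normal((d_model, d_ff)) * 0.1
b1 = np.zeros(d_ff)
W2 = rng.standard_normal((d_ff, d_model)) * 0.1
b2 = np.zeros(d_model)

y = x
for _ in range(6):  # six identical layers (weights shared here only for brevity)
    y = encoder_layer(y, W1, b1, W2, b2)
assert y.shape == x.shape
```

    In the decoder's self-attention sublayer, passing `causal_mask(seq)` into the attention call is what enforces the autoregressive property: the softmax assigns (near-)zero weight to every future position.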