The encoder consists of six identical layers, each containing a multi-head self-attention sublayer followed by a position-wise feed-forward network, with a residual connection and layer normalization around each sublayer. The decoder is likewise a stack of six identical layers, but each layer adds a third sublayer that performs multi-head attention over the encoder output, and its self-attention sublayer masks future positions so that predictions for a given position depend only on earlier positions, preserving the autoregressive property.
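The following is a minimal PyTorch sketch of this layer structure, not the authors' implementation: it wires one encoder layer and one decoder layer with post-sublayer residual connections and layer normalization, and stacks six of each. The hyperparameter values used here (d_model=512, 8 heads, d_ff=2048) correspond to the paper's base configuration; embeddings and positional encodings are assumed to have been applied already.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """One encoder layer: self-attention and a position-wise feed-forward
    network, each wrapped in a residual connection plus layer normalization."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        attn_out, _ = self.self_attn(x, x, x)            # multi-head self-attention
        x = self.norm1(x + self.dropout(attn_out))       # residual + layer norm
        x = self.norm2(x + self.dropout(self.ff(x)))     # feed-forward, residual + layer norm
        return x

class DecoderLayer(nn.Module):
    """One decoder layer: masked self-attention, attention over the encoder
    output, then the feed-forward network, each with residual + layer norm."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(d_model) for _ in range(3))
        self.dropout = nn.Dropout(dropout)

    def forward(self, y, memory):
        # Causal mask: position i may attend only to positions <= i.
        t = y.size(1)
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=y.device), diagonal=1)
        sa, _ = self.self_attn(y, y, y, attn_mask=causal)   # masked self-attention
        y = self.norm1(y + self.dropout(sa))
        ca, _ = self.cross_attn(y, memory, memory)          # attention over encoder output
        y = self.norm2(y + self.dropout(ca))
        y = self.norm3(y + self.dropout(self.ff(y)))
        return y

# Stacks of six identical layers, as described above.
encoder = nn.ModuleList([EncoderLayer() for _ in range(6)])
decoder = nn.ModuleList([DecoderLayer() for _ in range(6)])

src = torch.randn(2, 10, 512)   # (batch, source length, d_model), already embedded
tgt = torch.randn(2, 7, 512)    # (batch, target length, d_model)
mem = src
for layer in encoder:
    mem = layer(mem)
out = tgt
for layer in decoder:
    out = layer(out, mem)
print(out.shape)                # torch.Size([2, 7, 512])
```

The mask is applied inside the decoder's first attention sublayer only; the encoder-decoder attention that follows may look at every encoder position, since the entire source sentence is available at decoding time.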