Translations:BERT Pre-training of Deep Bidirectional Transformers/9/en


    BERT uses the encoder portion of the transformer architecture. The model takes a sequence of tokens as input and produces a contextualized embedding for each token. Two model sizes were released: BERT-Base (12 layers, 768 hidden units, 12 attention heads, 110M parameters) and BERT-Large (24 layers, 1024 hidden units, 16 attention heads, 340M parameters).
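A rough sketch of where those parameter counts come from, assuming the standard transformer encoder layout (the per-layer breakdown and the `approx_encoder_params` helper below are illustrative, not from the paper; biases and layer norms are omitted, which is why the totals come out slightly under the published 110M/340M figures):

```python
from dataclasses import dataclass

@dataclass
class BertConfig:
    num_layers: int   # L: number of encoder layers
    hidden_size: int  # H: hidden dimension
    num_heads: int    # A: attention heads

BERT_BASE = BertConfig(num_layers=12, hidden_size=768, num_heads=12)
BERT_LARGE = BertConfig(num_layers=24, hidden_size=1024, num_heads=16)

def approx_encoder_params(cfg, vocab_size=30522, max_positions=512, num_segments=2):
    # Embeddings: token + position + segment tables, each of width H.
    embeddings = (vocab_size + max_positions + num_segments) * cfg.hidden_size
    # Per layer: self-attention uses four H x H projections (Q, K, V, output),
    # and the feed-forward block uses H x 4H plus 4H x H, for 12*H^2 weights total.
    per_layer = 12 * cfg.hidden_size ** 2
    return embeddings + cfg.num_layers * per_layer

print(f"BERT-Base  ~{approx_encoder_params(BERT_BASE) / 1e6:.0f}M parameters")
print(f"BERT-Large ~{approx_encoder_params(BERT_LARGE) / 1e6:.0f}M parameters")
```

With the 30,522-token WordPiece vocabulary this estimate gives roughly 109M and 334M parameters, close to the published 110M and 340M once biases and layer-norm parameters are included.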