Translations:BERT Pre-training of Deep Bidirectional Transformers/16/en


    Fine-tuning is straightforward: for each downstream task, the task-specific inputs and outputs are plugged into the pre-trained model, and all parameters are fine-tuned end-to-end. For token-level tasks such as named entity recognition, the final hidden vector of each token is fed into a classification layer; for sequence-level tasks such as sentiment analysis, the final hidden vector of the [CLS] token is fed into the classifier.
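The two head types above can be sketched as follows. This is a minimal illustration, not BERT itself: the `hidden` array stands in for the encoder's final-layer output of shape (batch, seq_len, hidden_size), and the classification weights `W`, `b` are hypothetical, randomly initialized task-specific parameters (in real fine-tuning, gradients would also flow through the full pre-trained encoder).

```python
import numpy as np

rng = np.random.default_rng(0)
batch, seq_len, hidden_size, num_labels = 2, 8, 16, 3

# Stand-in for BERT's final hidden states: (batch, seq_len, hidden_size).
hidden = rng.standard_normal((batch, seq_len, hidden_size))

# Task-specific classification layer, newly initialized at fine-tuning time.
W = rng.standard_normal((hidden_size, num_labels))
b = np.zeros(num_labels)

# Token-level task (e.g. NER): classify every token's final hidden vector.
token_logits = hidden @ W + b          # shape: (batch, seq_len, num_labels)

# Sequence-level task (e.g. sentiment): use only the [CLS] token at position 0.
cls_logits = hidden[:, 0, :] @ W + b   # shape: (batch, num_labels)

print(token_logits.shape, cls_logits.shape)
```

The only structural difference between the two setups is which hidden vectors reach the classifier: all of them for token-level tasks, or just the [CLS] position for sequence-level tasks.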