    • Word2Vec / GloVe — static word embeddings pretrained on large corpora.
    • ELMo — contextualised embeddings from bidirectional LSTMs.
    • BERT (Devlin et al., 2019) — bidirectional transformer pretrained with masked language modelling; fine-tuned for classification, QA, NER, and more.
    • GPT series — autoregressive transformers demonstrating that scale and generative pretraining enable few-shot and zero-shot transfer.
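The key distinction between the first item and the later ones is static versus contextual representations. The toy sketch below (not any real model; the random vectors and the neighbour-averaging "encoder" are illustrative stand-ins) shows why this matters: a static lookup gives the ambiguous word "bank" the same vector in every sentence, while even a crude context-mixing encoder produces different vectors for its financial and riverside senses.

```python
import numpy as np

# Toy vocabulary with random "pretrained" static vectors (word2vec/GloVe style).
# In a real system these would come from training on a large corpus.
rng = np.random.default_rng(0)
vocab = ["the", "bank", "river", "money", "deposit", "flooded"]
static = {w: rng.normal(size=4) for w in vocab}

def static_embed(tokens):
    # A static embedding ignores context: "bank" maps to one fixed vector.
    return [static[t] for t in tokens]

def contextual_embed(tokens, window=1):
    # A crude stand-in for ELMo/BERT-style encoders: mix each word's vector
    # with its neighbours', so the output depends on the surrounding words.
    vecs = []
    for i, _ in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        vecs.append(np.mean([static[tokens[j]] for j in range(lo, hi)], axis=0))
    return vecs

s1 = ["the", "river", "bank", "flooded"]   # riverside sense of "bank"
s2 = ["the", "money", "bank", "deposit"]   # financial sense of "bank"

static_bank_1 = static_embed(s1)[2]
static_bank_2 = static_embed(s2)[2]
ctx_bank_1 = contextual_embed(s1)[2]
ctx_bank_2 = contextual_embed(s2)[2]
```

With static embeddings the two occurrences of "bank" are identical; with the contextual encoder they differ, which is exactly the property that made ELMo and BERT far more transferable to downstream tasks.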