- Word2Vec (Mikolov et al., 2013) / GloVe (Pennington et al., 2014) — static word embeddings pretrained on large corpora; each word gets a single vector regardless of context (see the first sketch after this list).
- ELMo (Peters et al., 2018) — contextualised embeddings from bidirectional LSTMs, so a word's vector varies with the sentence it appears in.
- BERT (Devlin et al., 2019) — bidirectional Transformer pretrained with masked language modelling; fine-tuned for classification, QA, NER, and more (see the fine-tuning sketch below).
- GPT series (Radford et al., 2018, 2019; Brown et al., 2020) — autoregressive Transformers demonstrating that scale and pretraining enable few-shot and zero-shot transfer (see the generation sketch below).
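To make the static-embedding contrast concrete, here is a minimal sketch of loading pretrained GloVe vectors with the gensim downloader. The library choice and the dataset name `glove-wiki-gigaword-100` (one of gensim's hosted pretrained sets) are assumptions of this example, not something prescribed by the original papers. The key point: "bank" receives the same vector whether the surrounding sentence is about rivers or finance.

```python
import gensim.downloader as api

# Download and load 100-dimensional GloVe vectors trained on Wikipedia + Gigaword.
# The result is a KeyedVectors object: a fixed lookup table from word to vector.
glove = api.load("glove-wiki-gigaword-100")

vec = glove["bank"]   # one static 100-d vector, regardless of context
print(vec.shape)      # (100,)

# Nearest neighbours mix both senses of "bank",
# since static embeddings cannot disambiguate by context.
print(glove.most_similar("bank", topn=5))
```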
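For BERT-style transfer, the usual recipe is to load the pretrained encoder and fine-tune it with a small task-specific head. Below is a minimal sketch using the Hugging Face `transformers` library (an assumed tooling choice, not part of the BERT paper itself); the tiny sentiment batch is hypothetical data included only to show the shapes involved.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the pretrained encoder; the classification head on top is
# randomly initialised and is what fine-tuning must learn.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# A toy sentiment batch (hypothetical examples).
batch = tokenizer(
    ["a wonderful film", "a tedious mess"],
    padding=True, truncation=True, return_tensors="pt",
)
labels = torch.tensor([1, 0])

# With labels supplied, the forward pass returns the classification loss;
# a real fine-tuning loop would wrap this in optimiser steps over a dataset.
outputs = model(**batch, labels=labels)
outputs.loss.backward()
print(float(outputs.loss))
```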
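Finally, a small sketch of prompt-based transfer with an autoregressive model, again via `transformers`. The freely available GPT-2 checkpoint stands in here for the much larger GPT models the few-shot claim actually refers to, so treat the output as illustrating the mechanism (the task is stated entirely in the prompt, with no gradient updates) rather than few-shot quality; the translation prompt is a hypothetical example.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The "task" is expressed purely in the prompt; no weights are updated.
prompt = "Translate English to French:\nsea otter -> loutre de mer\ncheese ->"
inputs = tok(prompt, return_tensors="pt")

out = model.generate(
    **inputs,
    max_new_tokens=8,
    do_sample=False,                # greedy decoding for reproducibility
    pad_token_id=tok.eos_token_id,  # GPT-2 has no pad token by default
)
print(tok.decode(out[0], skip_special_tokens=True))
```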