- Word2Vec / GloVe — static word embeddings pretrained on large corpora.
- ELMo — contextualised embeddings from bidirectional LSTMs.
- BERT (Devlin et al., 2019) — bidirectional transformer pretrained with masked language modelling; fine-tuned for classification, QA, NER, and more (a fine-tuning sketch follows this list).
- GPT series — autoregressive transformers demonstrating that scale and pretraining enable few-shot and zero-shot transfer.
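
The BERT entry above is the canonical example of the pretrain-then-fine-tune recipe: load a pretrained checkpoint, attach a task-specific head, and train briefly on labelled downstream data. Below is a minimal sketch of that recipe for binary text classification, assuming the Hugging Face `transformers` library; the checkpoint name, toy batch, and hyperparameters are illustrative assumptions, not details from this article.

```python
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load a pretrained BERT encoder and attach a fresh 2-label classification head.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Toy labelled batch standing in for a real downstream dataset.
texts = ["great movie", "terrible plot"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few gradient steps; real fine-tuning runs for epochs
    outputs = model(**batch, labels=labels)  # loss is computed against labels
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The same pattern extends to the other tasks mentioned above by swapping the head, e.g. `AutoModelForQuestionAnswering` for QA or `AutoModelForTokenClassification` for NER, while reusing the same pretrained encoder weights.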