Translations:Transfer Learning/18/en

    * '''Word2Vec / GloVe''' — static word {{Term|embedding|embeddings}} pretrained on large corpora.
    * '''ELMo''' — contextualised {{Term|embedding|embeddings}} from bidirectional {{Term|long short-term memory|LSTMs}}.
    * '''BERT''' (Devlin et al., 2019) — bidirectional {{Term|transformer}} pretrained with masked language modelling; fine-tuned for classification, QA, NER, and more (see the fine-tuning sketch after this list).
    * '''GPT series''' — autoregressive {{Term|transformer|Transformers}} demonstrating that scale and {{Term|pre-training|pretraining}} enable few-shot and zero-shot transfer.
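
The fine-tuning step referenced in the BERT entry above follows a common pattern: load a pretrained encoder, attach a small task-specific head, and train both on labelled data. The sketch below is a minimal illustration assuming the Hugging Face transformers library and PyTorch; the checkpoint name, label count, learning rate, and example sentence are illustrative choices, not taken from the original text.

<syntaxhighlight lang="python">
# Minimal sketch: fine-tuning a pretrained BERT checkpoint for binary
# sentence classification. Assumes the Hugging Face `transformers` library
# and PyTorch; checkpoint, labels, and the example sentence are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,  # fresh classification head on top of the pretrained encoder
)

# Tokenise a toy batch and attach a label for supervised fine-tuning.
batch = tokenizer(
    ["transfer learning reuses pretrained representations"],
    padding=True,
    truncation=True,
    return_tensors="pt",
)
batch["labels"] = torch.tensor([1])

# One gradient step: the pretrained weights and the new head update together.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**batch)      # outputs.loss is the cross-entropy on the toy batch
outputs.loss.backward()
optimizer.step()
</syntaxhighlight>

Swapping the classification head for a question-answering or token-classification head covers the QA and NER uses mentioned in the same entry.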
