* '''Negative sampling''' — instead of computing the full {{Term|softmax}}, the model contrasts the true context word against <math>k</math> randomly sampled "negative" words (sketched in code after this list).
* '''Hierarchical {{Term|softmax}}''' — organises the vocabulary in a binary tree, reducing the {{Term|softmax}} cost from <math>O(V)</math> to <math>O(\log V)</math> (see the second sketch below).
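Both tricks replace the <math>O(V)</math> normalisation with something much cheaper. Below is a minimal sketch of a single skip-gram update with negative sampling, assuming NumPy and hypothetical sizes <code>V</code>, <code>d</code> and <code>k</code>; for brevity it draws negatives uniformly, whereas word2vec samples from the unigram distribution raised to the 3/4 power.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: V words, d-dimensional vectors, k negative samples.
V, d, k = 10_000, 100, 5
W_in = rng.normal(scale=0.1, size=(V, d))   # target ("input") word vectors
W_out = rng.normal(scale=0.1, size=(V, d))  # context ("output") word vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(center, context, lr=0.025):
    """One skip-gram update with negative sampling for a (center, context) pair."""
    # Draw k negatives uniformly (word2vec uses a unigram^(3/4) distribution).
    negatives = rng.integers(0, V, size=k)
    words = np.concatenate(([context], negatives))  # 1 positive + k negatives
    labels = np.concatenate(([1.0], np.zeros(k)))   # push positive -> 1, negatives -> 0
    v = W_in[center]
    U = W_out[words]                                # (k+1, d): only k+1 rows, not V
    grad = (sigmoid(U @ v) - labels)[:, None]       # dLoss/dscore for logistic loss
    W_in[center] -= lr * (grad * U).sum(axis=0)
    np.subtract.at(W_out, words, lr * grad * v)     # safe if a sampled index repeats

# One update for a hypothetical (center, context) pair from a corpus window.
sgns_step(center=42, context=7)
</syntaxhighlight>

Only the <math>k+1</math> rows of <code>W_out</code> touched by the sampled words are read and updated per pair, which is where the saving over the full {{Term|softmax}} comes from.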

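For hierarchical {{Term|softmax}}, the sketch below scores a word as a product of binary decisions along a root-to-leaf path. It reuses <code>V</code>, <code>W_in</code> and <code>sigmoid</code> from the previous sketch and, as a simplifying assumption, uses a complete binary tree over the vocabulary indices rather than the Huffman tree word2vec builds.

<syntaxhighlight lang="python">
def hs_probability(center, word, W_nodes):
    """P(word | center) as a product of sigmoids along a root-to-leaf path.

    W_nodes holds one vector per internal tree node (heap-style indexing,
    over-allocated to 2*V rows for simplicity; a real implementation numbers
    the V - 1 internal nodes of a Huffman tree compactly).
    """
    v = W_in[center]
    p = 1.0
    node = 0        # root of an implicit complete binary tree over words 0..V-1
    lo, hi = 0, V   # leaf range covered by the current node
    while hi - lo > 1:
        mid = (lo + hi) // 2
        s = sigmoid(v @ W_nodes[node])
        if word < mid:           # go left with probability s
            p, hi, node = p * s, mid, 2 * node + 1
        else:                    # go right with probability 1 - s
            p, lo, node = p * (1.0 - s), mid, 2 * node + 2
    return p

W_nodes = rng.normal(scale=0.1, size=(2 * V, d))
print(hs_probability(center=42, word=7, W_nodes=W_nodes))
</syntaxhighlight>

Each call performs only about <math>\lceil \log_2 V \rceil</math> dot products with the node vectors on one path, versus <math>V</math> for the full {{Term|softmax}}, and training likewise updates only those path vectors.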