    • Negative sampling — instead of computing the full softmax, the model contrasts the true context word against $ k $ randomly sampled "negative" words.
    • Hierarchical softmax — organises the vocabulary in a binary tree, reducing the softmax cost from $ O(V) $ to $ O(\log V) $.
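The negative-sampling objective above can be sketched numerically. This is a minimal illustration, not word2vec's actual training loop: the vector values are random toy data, and the function name and shapes are assumptions for the example. For one (center, context) pair the model maximises $ \log \sigma(u_o \cdot v_c) + \sum_{i=1}^{k} \log \sigma(-u_i \cdot v_c) $, where the $ u_i $ are the $ k $ negative samples; below we return the negated value as a loss.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_loss(center_vec, context_vec, negative_vecs):
    """Negative-sampling loss for one (center, context) pair.

    center_vec:    (d,)   vector of the center word
    context_vec:   (d,)   vector of the true context word
    negative_vecs: (k, d) vectors of k sampled "negative" words
    """
    # Score the true context word as a positive example...
    pos = np.log(sigmoid(context_vec @ center_vec))
    # ...and the k sampled words as negatives (note the sign flip).
    neg = np.sum(np.log(sigmoid(-negative_vecs @ center_vec)))
    return -(pos + neg)  # negate: maximise log-likelihood = minimise loss

# Toy usage with random vectors (d = 5 dimensions, k = 3 negatives).
rng = np.random.default_rng(0)
d, k = 5, 3
v_c = rng.normal(size=d)
u_o = rng.normal(size=d)
u_neg = rng.normal(size=(k, d))
print(negative_sampling_loss(v_c, u_o, u_neg))
```

Note that only $ k + 1 $ dot products are computed per training pair, versus $ V $ for the full softmax, which is the whole point of the technique.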