Translations:Word Embeddings/24/en: Difference between revisions
Latest revision as of 23:34, 27 April 2026
- Negative sampling — instead of computing the full softmax, the model contrasts the true context word against $ k $ randomly sampled "negative" words.
- Hierarchical softmax — organises the vocabulary in a binary tree, reducing the softmax cost from $ O(V) $ to $ O(\log V) $.
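The negative-sampling objective above can be sketched in a few lines of NumPy. This is a minimal illustration, not word2vec's actual implementation: the embedding matrices, dimensions, and the uniform negative sampler are all assumptions (word2vec draws negatives from a unigram distribution raised to the 3/4 power), and no gradient step is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

V, d, k = 1000, 50, 5                        # vocab size, embedding dim, negatives
W_in = rng.normal(scale=0.1, size=(V, d))    # input (centre-word) embeddings
W_out = rng.normal(scale=0.1, size=(V, d))   # output (context-word) embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_loss(center, context):
    """Loss for one (centre, context) pair with k sampled negatives."""
    v = W_in[center]
    # score the true context word: push sigma(u_ctx . v) toward 1
    pos = sigmoid(W_out[context] @ v)
    # draw k "negative" words uniformly (word2vec uses unigram^0.75)
    negs = rng.integers(0, V, size=k)
    # push sigma(u_neg . v) toward 0, i.e. sigma(-u_neg . v) toward 1
    neg = sigmoid(-W_out[negs] @ v)
    return -np.log(pos) - np.sum(np.log(neg))

loss = neg_sampling_loss(center=3, context=7)
```

The key point is cost: each update touches only the true context word and `k` negatives, so the per-example work is `O(k·d)` rather than the `O(V·d)` a full softmax would require.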