- Negative sampling — instead of computing the full softmax, the model contrasts the true context word against $ k $ randomly sampled "negative" words.
- Hierarchical softmax — organises the vocabulary in a binary tree, reducing the softmax cost from $ O(V) $ to $ O(\log V) $.
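The negative-sampling objective can be sketched in a few lines of NumPy. This is a minimal illustration, not word2vec's actual implementation: the embedding matrices, toy sizes, and uniform negative sampler are all assumptions made for the example (word2vec draws negatives from a smoothed unigram distribution).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: vocabulary size V, embedding dimension d,
# k negative samples per (center, context) pair.
V, d, k = 1000, 50, 5
W_in = rng.normal(scale=0.1, size=(V, d))   # center-word embeddings
W_out = rng.normal(scale=0.1, size=(V, d))  # context-word embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_loss(center, context, negatives):
    """Loss for one (center, context) pair with k sampled negatives:
    -log sigma(v_c . u_o) - sum_j log sigma(-v_c . u_j).
    Only k + 1 output vectors are touched, instead of all V."""
    v = W_in[center]
    pos = sigmoid(W_out[context] @ v)     # push the true pair's score up
    neg = sigmoid(-W_out[negatives] @ v)  # push sampled negatives down
    return -np.log(pos) - np.log(neg).sum()

center, context = 3, 17
# Uniform sampling here for simplicity; word2vec uses unigram^(3/4).
negatives = rng.integers(0, V, size=k)
loss = negative_sampling_loss(center, context, negatives)
```

Minimizing this loss raises the dot product of the true (center, context) pair while lowering it for the sampled negatives, approximating the full-softmax gradient at a fraction of the cost.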