All translations
Found 3 translations.
| Name | Current message text |
|---|---|
| English (en) | * '''Negative sampling''' — instead of computing the full {{Term|softmax}}, the model contrasts the true context word against <math>k</math> randomly sampled "negative" words. * '''Hierarchical {{Term|softmax}}''' — organises the vocabulary in a binary tree, reducing the {{Term|softmax}} cost from <math>O(V)</math> to <math>O(\log V)</math>. |
| Spanish (es) | * '''Muestreo negativo''' — en lugar de calcular la {{Term|softmax|softmax}} completa, el modelo contrasta la palabra de contexto verdadera con <math>k</math> palabras «negativas» muestreadas aleatoriamente. * '''{{Term|softmax|Softmax}} jerárquica''' — organiza el vocabulario en un árbol binario, reduciendo el coste de la {{Term|softmax|softmax}} de <math>O(V)</math> a <math>O(\log V)</math>. |
| Chinese (zh) | * '''负采样''' — 模型不计算完整的 {{Term|softmax|softmax}},而是将真实的上下文词与 <math>k</math> 个随机采样的「负」词进行对比。 * '''分层 {{Term|softmax|softmax}}''' — 将词汇表组织成二叉树,把 {{Term|softmax|softmax}} 的代价从 <math>O(V)</math> 降到 <math>O(\log V)</math>。 |
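The negative-sampling idea described in the message above can be sketched in a few lines of NumPy. This is a minimal illustration, not word2vec's actual implementation: the embedding matrices, dimensions, and the uniform sampling of negatives are all assumptions for the example (word2vec draws negatives from a smoothed unigram distribution). It shows how the per-pair cost becomes <math>k+1</math> sigmoid evaluations instead of a <math>V</math>-way softmax.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for the sketch: vocabulary V, embedding dim d, k negatives.
V, d, k = 1000, 50, 5
in_vecs = rng.normal(scale=0.1, size=(V, d))   # center-word ("input") embeddings
out_vecs = rng.normal(scale=0.1, size=(V, d))  # context-word ("output") embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_loss(center, context, negatives):
    """Negative-sampling objective for one (center, context) pair:
    -log sigma(u_o . v_c) - sum_i log sigma(-u_i . v_c).
    Only k+1 dot products are needed, versus V for a full softmax."""
    v_c = in_vecs[center]
    pos = np.log(sigmoid(out_vecs[context] @ v_c))
    neg = np.sum(np.log(sigmoid(-out_vecs[negatives] @ v_c)))
    return -(pos + neg)

# Uniform negatives here, purely for illustration.
negatives = rng.integers(0, V, size=k)
loss = neg_sampling_loss(center=3, context=7, negatives=negatives)
```

The loss is a finite positive number for random embeddings; training would lower it by pushing the true context's dot product up and the negatives' dot products down.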