All translations

Found 3 translations.

Name | Current message text
English (en) | '''Efficient Estimation of Word Representations in Vector Space''' is a 2013 paper by Mikolov et al. from Google that introduced '''Word2Vec''', a family of computationally efficient methods for learning distributed word representations (word {{Term|embedding|embeddings}}) from large text corpora. The paper proposed two novel architectures, '''Continuous Bag-of-Words''' (CBOW) and '''Skip-gram''', that could be trained on billions of words in hours, producing vector representations that captured syntactic and semantic word relationships, including the celebrated word analogy property.
Spanish (es) | '''Efficient Estimation of Word Representations in Vector Space''' es un artículo de 2013 de Mikolov et al. de Google que introdujo '''Word2Vec''', una familia de métodos computacionalmente eficientes para aprender representaciones distribuidas de palabras (word {{Term|embedding|embeddings}}) a partir de grandes corpus de texto. El artículo propuso dos arquitecturas novedosas, '''Continuous Bag-of-Words''' (CBOW) y '''Skip-gram''', que podían entrenarse con miles de millones de palabras en cuestión de horas, produciendo representaciones vectoriales que capturaban relaciones sintácticas y semánticas entre palabras, incluida la célebre propiedad de analogía de palabras.
Chinese (zh) | '''Efficient Estimation of Word Representations in Vector Space''' 是 Mikolov 等人于 2013 年在 Google 发表的论文,提出了 '''Word2Vec'''——一系列计算高效的方法,可从大规模文本语料中学习分布式词表示(词 {{Term|embedding|嵌入}})。该论文提出了两种新颖的架构——'''Continuous Bag-of-Words'''(CBOW)和 '''Skip-gram''',可在数小时内基于数十亿词进行训练,生成的向量表示能够捕捉词语之间的句法和语义关系,包括著名的词类比性质。
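The Skip-gram architecture named in the message above can be sketched in a few lines: each center word is trained to predict the words around it, and the learned input vectors become the embeddings. This is a minimal NumPy illustration only; the toy corpus, dimensions, and full-softmax training are assumptions for clarity (the paper itself trains on billions of words with efficiency tricks such as hierarchical softmax).

```python
# Minimal skip-gram sketch (illustrative; NOT the paper's actual training
# code): a center word predicts its context words via softmax over the
# vocabulary, and the input matrix W_in holds the learned word embeddings.
import numpy as np

rng = np.random.default_rng(0)
corpus = "the king rules the land the queen rules the land".split()  # toy data
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8          # vocabulary size, embedding dimension (assumed)

W_in = rng.normal(0, 0.1, (V, D))   # center-word (input) embeddings
W_out = rng.normal(0, 0.1, (V, D))  # context-word (output) embeddings

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr, window = 0.05, 2
for epoch in range(200):
    for t, center in enumerate(corpus):
        c = idx[center]
        for j in range(max(0, t - window), min(len(corpus), t + window + 1)):
            if j == t:
                continue
            o = idx[corpus[j]]                 # observed context word
            p = softmax(W_out @ W_in[c])       # P(context | center)
            grad = p.copy()
            grad[o] -= 1.0                     # cross-entropy gradient on scores
            g_out = np.outer(grad, W_in[c])
            g_in = W_out.T @ grad
            W_out -= lr * g_out
            W_in[c] -= lr * g_in

def nearest(vec, exclude):
    """Return the vocabulary word whose embedding is most cosine-similar."""
    sims = (W_in @ vec) / (np.linalg.norm(W_in, axis=1) * np.linalg.norm(vec) + 1e-9)
    for w in exclude:
        sims[idx[w]] = -np.inf
    return vocab[int(np.argmax(sims))]
```

The celebrated analogy property corresponds to arithmetic on these vectors (e.g. querying `nearest(W_in[idx['king']] - W_in[idx['he']] + W_in[idx['she']], ...)` in a model trained on a real corpus); this toy corpus is far too small to exhibit it reliably.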