Efficient Estimation of Word Representations

    Latest revision as of 02:35, 27 April 2026

Research Paper
Authors: Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean
Year: 2013
Venue: ICLR Workshop
Topic area: NLP
Difficulty: Research
arXiv: 1301.3781

    Efficient Estimation of Word Representations in Vector Space is a 2013 paper by Mikolov et al. from Google that introduced Word2Vec, a family of computationally efficient methods for learning distributed word representations (word embeddings) from large text corpora. The paper proposed two novel architectures — Continuous Bag-of-Words (CBOW) and Skip-gram — that could be trained on billions of words in hours, producing vector representations that captured syntactic and semantic word relationships, including the celebrated word analogy property.

    Overview

    Prior work on distributed word representations used neural language models that jointly learned word vectors and a language model, but these were computationally expensive and difficult to scale to very large corpora. Simpler models like latent semantic analysis (LSA) captured co-occurrence statistics but failed to preserve linear regularities between words.

    Mikolov et al. proposed two architectures that stripped away the complexity of full neural language models — removing the nonlinear hidden layer — to focus on learning word vectors efficiently. The resulting models could be trained on corpora of billions of words in a single day using modest computational resources, while producing word vectors of surprisingly high quality.

    Key Contributions

    • CBOW model: An architecture that predicts a target word from its surrounding context words, using the average of context word vectors as input.
    • Skip-gram model: An architecture that predicts surrounding context words given a target word, effectively inverting the CBOW objective.
    • Word analogy evaluation: Introduction of the word analogy test set for evaluating the quality of word vectors, demonstrating that vector arithmetic captures semantic and syntactic relationships.
    • Scalability: Demonstration that high-quality word vectors could be learned from very large corpora (up to 6 billion tokens) with training times of under a day.

    Methods

    Both models operate on a sliding window over the text corpus and learn to predict words from their context (CBOW) or context from words (Skip-gram).
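
The sliding window can be sketched as follows; this is an illustrative pair generator, not the original C implementation, and the function name and window radius are chosen here for clarity:

```python
# Sketch: generating training pairs from a sliding window of radius c.
def context_windows(tokens, c=2):
    """Yield (context_words, center_word) pairs for CBOW;
    Skip-gram simply swaps the roles of the two elements."""
    pairs = []
    for t, center in enumerate(tokens):
        # Collect up to c words on each side, skipping the center itself
        # and clipping at the corpus boundaries.
        context = [tokens[t + j] for j in range(-c, c + 1)
                   if j != 0 and 0 <= t + j < len(tokens)]
        pairs.append((context, center))
    return pairs

pairs = context_windows(["the", "quick", "brown", "fox", "jumps"])
```

Here `pairs[2]` pairs the center word "brown" with its four neighbors, which is exactly the unit of prediction both architectures consume.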

    The Continuous Bag-of-Words (CBOW) model predicts the center word $ w_t $ given a window of context words $ \{w_{t-c}, \ldots, w_{t-1}, w_{t+1}, \ldots, w_{t+c}\} $. The input is the average of the context word vectors:

    $ \mathbf{h} = \frac{1}{2c} \sum_{-c \leq j \leq c, j \neq 0} \mathbf{v}_{w_{t+j}} $

    The probability of the target word is computed using a softmax:

    $ P(w_t \mid \text{context}) = \frac{\exp(\mathbf{v}'_{w_t} \cdot \mathbf{h})}{\sum_{w \in V} \exp(\mathbf{v}'_w \cdot \mathbf{h})} $

    where $ \mathbf{v}_w $ and $ \mathbf{v}'_w $ are the input and output vector representations of word $ w $.
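
The CBOW forward pass above can be written out directly; this is a minimal sketch with toy dimensions, and the matrix names (`W_in` for the input vectors $ \mathbf{v}_w $, `W_out` for the output vectors $ \mathbf{v}'_w $) are this sketch's convention:

```python
import numpy as np

V, d = 10, 4                      # toy vocabulary size, embedding dimension
rng = np.random.default_rng(0)
W_in = rng.normal(size=(V, d))    # input vectors v_w, one row per word
W_out = rng.normal(size=(V, d))   # output vectors v'_w

context_ids = [1, 2, 4, 5]        # indices of the 2c context words
h = W_in[context_ids].mean(axis=0)        # averaged context vector h
scores = W_out @ h                        # v'_w . h for every word w
p = np.exp(scores) / np.exp(scores).sum() # softmax over the vocabulary
```

Averaging (rather than concatenating) the context vectors is what makes the model a "bag" of words: the order of the context words is discarded.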

    The Skip-gram model reverses this, predicting context words from the center word. Given the center word $ w_t $, the model maximizes:

    $ \frac{1}{T} \sum_{t=1}^{T} \sum_{-c \leq j \leq c, j \neq 0} \log P(w_{t+j} \mid w_t) $

    where:

    $ P(w_O \mid w_I) = \frac{\exp(\mathbf{v}'_{w_O} \cdot \mathbf{v}_{w_I})}{\sum_{w \in V} \exp(\mathbf{v}'_w \cdot \mathbf{v}_{w_I})} $
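
The Skip-gram objective for a single window can be sketched the same way; the full softmax is used here for clarity (real training avoids it, as discussed below), and the indices and helper name are illustrative:

```python
import numpy as np

V, d = 10, 4                      # toy vocabulary size, embedding dimension
rng = np.random.default_rng(1)
W_in = rng.normal(size=(V, d))    # input vectors v_w
W_out = rng.normal(size=(V, d))   # output vectors v'_w

def log_prob(center, context):
    """log P(w_O | w_I) under the full-softmax Skip-gram model."""
    scores = W_out @ W_in[center]         # v'_w . v_{w_I} for all w
    log_z = np.log(np.exp(scores).sum())  # log of the partition function
    return scores[context] - log_z

window = [2, 3, 5, 6]                     # context indices around w_t = 4
objective = sum(log_prob(4, c) for c in window)
```

Each term of the inner sum in the objective is one `log_prob` call; training ascends this quantity over every position $ t $ in the corpus.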

    Computing the full softmax over a large vocabulary is prohibitively expensive. The paper used hierarchical softmax with a Huffman tree to reduce the complexity from $ O(V) $ to $ O(\log V) $. A follow-up paper introduced negative sampling as a simpler and often more effective alternative.
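
The negative-sampling alternative from the follow-up paper can be sketched as follows; the sampling here is uniform for brevity (the paper draws negatives from a smoothed unigram distribution), and all names are this sketch's own:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
V, d, k = 10, 4, 3                # toy vocabulary, dimension, negatives
W_in = rng.normal(size=(V, d))
W_out = rng.normal(size=(V, d))

center, context = 4, 6
negatives = rng.integers(0, V, size=k)    # uniform here; unigram^0.75 in practice

# Push the true (center, context) pair together...
loss = -np.log(sigmoid(W_out[context] @ W_in[center]))
# ...and push k sampled negatives apart, replacing the O(V) softmax
# normalization with k + 1 dot products.
loss -= np.log(sigmoid(-(W_out[negatives] @ W_in[center]))).sum()
```

The cost per training pair thus no longer depends on the vocabulary size at all, which is what made training on billions of tokens practical.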

    A key architectural decision was the removal of the nonlinear hidden layer present in prior neural language models. This simplification was crucial for computational efficiency and, surprisingly, did not harm the quality of the learned representations.

    Results

    The most striking result was the emergence of linear relationships between word vectors. The learned representations supported word analogies through vector arithmetic:

    $ \text{vector}(\text{"king"}) - \text{vector}(\text{"man"}) + \text{vector}(\text{"woman"}) \approx \text{vector}(\text{"queen"}) $
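
The analogy test amounts to a nearest-neighbor search by cosine similarity; the sketch below uses hand-built toy vectors chosen so the expected relationship holds (real embeddings are learned), and follows the paper's convention of excluding the three query words from the candidates:

```python
import numpy as np

emb = {  # toy 3-d "embeddings", constructed for illustration only
    "king":   np.array([0.9, 0.8, 0.1]),
    "queen":  np.array([0.9, 0.1, 0.8]),
    "man":    np.array([0.1, 0.9, 0.1]),
    "woman":  np.array([0.1, 0.1, 0.9]),
    "prince": np.array([0.8, 0.7, 0.2]),
}

def analogy(a, b, c):
    """Return the word nearest to vector(a) - vector(b) + vector(c)."""
    target = emb[a] - emb[b] + emb[c]
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    # Exclude the three query words, as in the paper's evaluation.
    return max((w for w in emb if w not in {a, b, c}),
               key=lambda w: cos(emb[w], target))
```

With these toy vectors, `analogy("king", "man", "woman")` returns "queen", mirroring the example above.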

    The paper introduced a comprehensive word analogy test set with 8,869 semantic and 10,675 syntactic analogy questions. Results showed:

    • Skip-gram achieved the best semantic accuracy (55%) and competitive syntactic accuracy on a 783M-word training corpus.
    • CBOW was faster to train and achieved the best syntactic accuracy, with competitive semantic accuracy.
    • Accuracy improved consistently with training data size and vector dimensionality, up to a point of diminishing returns.
    • Both models substantially outperformed prior approaches including NNLM and RNNLM on the analogy task while training orders of magnitude faster.

    Training on a 6-billion-word Google News corpus with 300-dimensional vectors (using the follow-up negative sampling approach) produced the widely used pre-trained Word2Vec vectors.

    Impact

    Word2Vec transformed NLP by establishing word embeddings as the standard input representation for neural NLP systems. Before Word2Vec, most NLP systems relied on sparse, high-dimensional representations like one-hot vectors or TF-IDF. Word2Vec demonstrated that dense, low-dimensional vectors could capture rich linguistic structure and transfer meaningfully across tasks.

    The analogy property captured the public imagination and became an iconic example of learned representations encoding meaningful structure. Word2Vec embeddings were used as features in virtually every NLP system of the mid-2010s, from sentiment analysis to machine translation.

    The models directly influenced subsequent work on embeddings, including GloVe, FastText, and contextual embeddings like ELMo and BERT. While static word vectors have been largely superseded by contextual representations from large language models, Word2Vec remains a foundational reference point and is still used in applications where computational efficiency is paramount.


    References

• Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. ICLR 2013 Workshop. arXiv:1301.3781.
    • Mikolov, T., Sutskever, I., Chen, K., Corrado, G., & Dean, J. (2013). Distributed Representations of Words and Phrases and their Compositionality. NeurIPS 2013. arXiv:1310.4546.
    • Pennington, J., Socher, R., & Manning, C. D. (2014). GloVe: Global Vectors for Word Representation. EMNLP 2014.