
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding is a 2019 paper by Devlin et al. from Google AI Language that introduced BERT (Bidirectional Encoder Representations from Transformers), a method for pre-training deep bidirectional language representations. BERT transformed NLP by demonstrating that a single pre-trained model could be fine-tuned to achieve state-of-the-art results on a wide range of downstream tasks with minimal task-specific architecture modifications.