BERT uses the encoder portion of the Transformer architecture. The model takes a sequence of tokens as input and produces a contextualized embedding for each token. Two model sizes were released: BERT-Base (12 layers, 768 hidden units, 12 attention heads, 110M parameters) and BERT-Large (24 layers, 1024 hidden units, 16 attention heads, 340M parameters).
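The parameter counts above follow directly from the layer dimensions. A rough sketch of the arithmetic (assuming the published WordPiece vocabulary of 30,522 tokens, 512 position embeddings, 2 segment embeddings, and a feed-forward width of 4× the hidden size; the function name is illustrative):

```python
def count_bert_params(num_layers, hidden, vocab=30522,
                      max_pos=512, type_vocab=2, ffn_mult=4):
    """Approximate parameter count (weights + biases) for a BERT-style encoder."""
    ffn = hidden * ffn_mult
    # Embeddings: token + position + segment tables, plus one LayerNorm (gamma, beta)
    emb = (vocab + max_pos + type_vocab) * hidden + 2 * hidden
    # Self-attention: Q, K, V, and output projections, each (hidden x hidden) + bias
    attn = 4 * (hidden * hidden + hidden)
    # Feed-forward: hidden -> ffn and ffn -> hidden, with biases
    ff = hidden * ffn + ffn + ffn * hidden + hidden
    # Two LayerNorms per encoder layer (gamma, beta each)
    ln = 2 * 2 * hidden
    # Pooler head on the [CLS] token: (hidden x hidden) + bias
    pooler = hidden * hidden + hidden
    return emb + num_layers * (attn + ff + ln) + pooler
```

Evaluating this for the two released configurations gives roughly 109M parameters for BERT-Base and 335M for BERT-Large, matching the commonly rounded figures of 110M and 340M.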