Translations:Convolutional Neural Networks/22/en: Difference between revisions

    Revision as of 21:57, 27 April 2026

    Message definition (Convolutional Neural Networks)
    {| class="wikitable"
    |-
    ! Architecture !! Year !! Key contribution !! Depth
    |-
    | '''LeNet-5''' || 1998 || Pioneered CNNs for handwritten digit recognition (MNIST) || 5 layers
    |-
    | '''AlexNet''' || 2012 || Won ImageNet; popularised ReLU, {{Term|dropout}}, GPU training || 8 layers
    |-
    | '''VGGNet''' || 2014 || Showed depth matters; used only <math>3 \times 3</math> filters throughout || 16–19 layers
    |-
    | '''GoogLeNet (Inception)''' || 2014 || Introduced inception modules with parallel filter sizes || 22 layers
    |-
    | '''ResNet''' || 2015 || Introduced residual connections enabling very deep networks || 50–152+ layers
    |-
    | '''DenseNet''' || 2017 || Connected each layer to every subsequent layer via dense blocks || 121–264 layers
    |-
    | '''EfficientNet''' || 2019 || Compound scaling of depth, width, and resolution || Variable
    |}
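The VGGNet row notes that depth with small filters pays off: two stacked <math>3 \times 3</math> convolutions cover the same <math>5 \times 5</math> receptive field as a single larger filter while using fewer weights. A minimal arithmetic sketch of that trade-off follows; the channel count <code>C = 64</code> is an illustrative assumption, not a figure from the table.

```python
def conv_params(kernel: int, channels: int) -> int:
    """Weight count for one conv layer with `channels` inputs and
    `channels` outputs (biases ignored for simplicity)."""
    return kernel * kernel * channels * channels

C = 64  # assumed channel width, for illustration only

one_5x5 = conv_params(5, C)       # a single 5x5 conv: 5*5*C*C weights
two_3x3 = 2 * conv_params(3, C)   # two stacked 3x3 convs: 2*3*3*C*C weights

# The stacked 3x3 layers see the same 5x5 receptive field with fewer
# parameters, and insert an extra nonlinearity between them.
print(one_5x5, two_3x3)
```

For <code>C = 64</code> this gives 102,400 weights for the single <math>5 \times 5</math> layer versus 73,728 for the pair of <math>3 \times 3</math> layers, a reduction of about 28%.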