Translations:Convolutional Neural Networks/22/zh: Difference between revisions

    From Marovi AI
    (Batch translate Convolutional Neural Networks unit 22 → zh)
    Tag: translation
    {| class="wikitable"
    |-
    ! 架构 !! 年份 !! 主要贡献 !! 深度
    |-
    | '''LeNet-5''' || 1998 || 开创性地将 CNN 用于手写数字识别(MNIST) || 5 层
    |-
    | '''AlexNet''' || 2012 || 赢得 ImageNet;推广了 ReLU、{{Term|dropout|dropout}} 和 GPU 训练 || 8 层
    |-
    | '''VGGNet''' || 2014 || 表明深度很重要;全网络仅使用 <math>3 \times 3</math> 滤波器 || 16–19 层
    |-
    | '''GoogLeNet (Inception)''' || 2014 || 引入了具有并行滤波器尺寸的 inception 模块 || 22 层
    |-
    | '''ResNet''' || 2015 || 引入了残差连接,使非常深的网络成为可能 || 50–152+ 层
    |-
    | '''DenseNet''' || 2017 || 通过密集块将每一层连接到后续每一层 || 121–264 层
    |-
    | '''EfficientNet''' || 2019 || 对深度、宽度和分辨率进行复合缩放 || 可变
    |}

    Latest revision as of 23:37, 27 April 2026

    Message definition (Convolutional Neural Networks)
    {| class="wikitable"
    |-
    ! Architecture !! Year !! Key contribution !! Depth
    |-
    | '''LeNet-5''' || 1998 || Pioneered CNNs for handwritten digit recognition (MNIST) || 5 layers
    |-
    | '''AlexNet''' || 2012 || Won ImageNet; popularised ReLU, {{Term|dropout}}, GPU training || 8 layers
    |-
    | '''VGGNet''' || 2014 || Showed depth matters; used only <math>3 \times 3</math> filters throughout || 16–19 layers
    |-
    | '''GoogLeNet (Inception)''' || 2014 || Introduced inception modules with parallel filter sizes || 22 layers
    |-
    | '''ResNet''' || 2015 || Introduced residual connections enabling very deep networks || 50–152+ layers
    |-
    | '''DenseNet''' || 2017 || Connected each layer to every subsequent layer via dense blocks || 121–264 layers
    |-
    | '''EfficientNet''' || 2019 || Compound scaling of depth, width, and resolution || Variable
    |}
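    The residual connections credited to ResNet in the table above can be illustrated with a minimal sketch. This is not ResNet's actual convolutional architecture: it assumes fully-connected layers in place of convolutions, and the `residual_block` function and its weight shapes are illustrative choices, not part of the original. The key idea it shows is the identity shortcut, where the block's input is added back to its transformed output before the final activation.

    ```python
    import numpy as np

    def relu(x):
        # Rectified linear unit, popularised by AlexNet (see table)
        return np.maximum(x, 0)

    def residual_block(x, w1, w2):
        # ResNet-style shortcut: output = ReLU(F(x) + x), where F is a
        # two-layer transformation and x is added back unchanged.
        h = relu(x @ w1)
        return relu(h @ w2 + x)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))          # batch of 4 feature vectors
    w1 = 0.1 * rng.normal(size=(8, 8))   # hypothetical layer weights
    w2 = 0.1 * rng.normal(size=(8, 8))
    y = residual_block(x, w1, w2)
    print(y.shape)  # → (4, 8): shortcut requires matching input/output shapes
    ```

    Because the gradient of the shortcut path is the identity, stacking many such blocks avoids the vanishing-gradient problem that limited earlier deep networks, which is what the table means by "enabling very deep networks".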