{| class="wikitable"
|-
! Architecture !! Year !! Key contribution !! Depth
|-
| '''LeNet-5''' || 1998 || Pioneered CNNs for handwritten digit recognition (MNIST) || 5 layers
|-
| '''AlexNet''' || 2012 || Won ImageNet; popularised ReLU, dropout, and GPU training || 8 layers
|-
| '''VGGNet''' || 2014 || Showed depth matters; used only <math>3 \times 3</math> filters throughout || 16–19 layers
|-
| '''GoogLeNet (Inception)''' || 2014 || Introduced inception modules with parallel filter sizes || 22 layers
|-
| '''ResNet''' || 2015 || Introduced residual connections enabling very deep networks || 50–152+ layers
|-
| '''DenseNet''' || 2017 || Connected each layer to every subsequent layer via dense blocks || 121–264 layers
|-
| '''EfficientNet''' || 2019 || Compound scaling of depth, width, and resolution || Variable
|}
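The residual connection credited to ResNet in the table above can be sketched in a few lines. This is a minimal illustration under simplifying assumptions: the learned transform <math>F</math> is a single linear layer rather than the stacked convolutions with batch normalization that real ResNet blocks use, but the key idea is the same: the input is added back to the transformed output before the nonlinearity, giving gradients a direct path through very deep stacks.

```python
import numpy as np

def relu(x):
    """Elementwise ReLU nonlinearity."""
    return np.maximum(0.0, x)

def residual_block(x, weight):
    """Minimal residual block: output = ReLU(F(x) + x).

    F is a single linear transform here (an illustrative stand-in
    for the conv layers of an actual ResNet block). The identity
    shortcut `+ x` is what lets gradients bypass F entirely.
    """
    fx = weight @ x       # the learned transformation F(x)
    return relu(fx + x)   # identity shortcut added before the nonlinearity

# With a zero-initialized transform, the block reduces to the identity
# (for non-negative inputs), which is why deep residual stacks are easy
# to optimize: each block only has to learn a correction to x.
x = np.array([1.0, 2.0, 3.0])
print(residual_block(x, np.zeros((3, 3))))
```

Note how the shortcut changes the learning problem: instead of fitting a full mapping, each block fits a residual <math>F(x) = H(x) - x</math>, which is close to zero when the desired mapping is near the identity.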