Translations:Convolutional Neural Networks/22/en: Difference between revisions
Revision as of 21:57, 27 April 2026
| Architecture | Year | Key contribution | Depth |
|---|---|---|---|
| LeNet-5 | 1998 | Pioneered CNNs for handwritten digit recognition (MNIST) | 5 layers |
| AlexNet | 2012 | Won ImageNet; popularised ReLU, dropout, GPU training | 8 layers |
| VGGNet | 2014 | Showed depth matters; used only $3 \times 3$ filters throughout | 16–19 layers |
| GoogLeNet (Inception) | 2014 | Introduced inception modules with parallel filter sizes | 22 layers |
| ResNet | 2015 | Introduced residual connections enabling very deep networks | 50–152+ layers |
| DenseNet | 2017 | Connected each layer to every subsequent layer via dense blocks | 121–264 layers |
| EfficientNet | 2019 | Compound scaling of depth, width, and resolution | Variable |
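VGGNet's preference for stacked $3 \times 3$ filters can be checked with a quick parameter count: two stacked $3 \times 3$ convolutions cover the same $5 \times 5$ receptive field as a single $5 \times 5$ convolution, but with fewer weights. A minimal sketch (the `conv_params` helper and channel count are illustrative, not from any specific implementation):

```python
def conv_params(k: int, channels: int) -> int:
    """Weights of a k x k conv with `channels` input and output channels.

    Biases are ignored; this counts only the k * k * C_in * C_out kernel
    weights, which dominate the total.
    """
    return k * k * channels * channels

c = 64  # illustrative channel count
stacked_3x3 = 2 * conv_params(3, c)  # two 3x3 layers: 2 * 9 * c^2 = 18 c^2
single_5x5 = conv_params(5, c)       # one 5x5 layer:      25 * c^2

print(stacked_3x3, single_5x5)  # the stacked pair is cheaper
```

The stack is both cheaper ($18c^2$ vs. $25c^2$ weights) and deeper, inserting an extra nonlinearity between the two layers.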
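ResNet's residual connection, the table's entry for 2015, can be sketched in a few lines: the block computes $y = \mathrm{ReLU}(x + F(x))$, so the identity path lets signal (and gradients) bypass the learned transformation $F$ entirely. The toy $F$ below stands in for the block's convolution layers; it is purely illustrative.

```python
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)

def residual_block(x: np.ndarray, f) -> np.ndarray:
    """Residual connection: add the input back onto f's output.

    In a real ResNet block, f would be two or three conv/batch-norm layers;
    here f is any array-to-array function of matching shape.
    """
    return relu(x + f(x))

# Toy transformation standing in for the conv layers (illustrative only).
f = lambda x: 0.5 * x

x = np.array([1.0, -2.0, 3.0])
y = residual_block(x, f)  # relu(1.5 * x) -> [1.5, 0.0, 4.5]
```

Because the block only needs to learn the residual $F(x) = y - x$, a block that should do nothing can simply drive $F$ toward zero, which is what makes 50–152+ layer networks trainable.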