- Use pretrained models (transfer learning) when labelled data is limited; a minimal fine-tuning sketch follows this list.
- Prefer small kernels ($ 3 \times 3 $) stacked in depth: two $ 3 \times 3 $ layers cover the same receptive field as one $ 5 \times 5 $ layer but with fewer parameters and an extra nonlinearity between them (see the parameter count below).
- Apply batch normalisation after convolution and before activation, as in the block sketch below.
- Use data augmentation generously to reduce overfitting; the fine-tuning sketch includes typical transforms.
- Replace fully connected layers with global average pooling to reduce parameters, also shown in the block sketch.
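
A minimal transfer-learning sketch, assuming PyTorch and torchvision are available; the ResNet-18 backbone, ten-class head, and the specific augmentations are illustrative stand-ins rather than a prescribed recipe.

```python
import torch.nn as nn
from torchvision import models, transforms

# Typical training-time augmentation: random crops, flips, and colour jitter.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.2, 0.2, 0.2),
    transforms.ToTensor(),
])

# Load a pretrained backbone and freeze it so only the new head trains.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False

# Replace the classifier head; 10 classes is a hypothetical example.
model.fc = nn.Linear(model.fc.in_features, 10)
# Train model.fc first; unfreeze deeper layers later for fine-tuning if needed.
```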
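The parameter saving from stacking small kernels can be checked directly. The channel width of 64 below is an arbitrary example, and biases are omitted for a clean count.

```python
import torch.nn as nn

C = 64  # hypothetical channel width, kept constant across layers

# Two stacked 3x3 convolutions: same 5x5 receptive field as one 5x5 layer.
stacked = nn.Sequential(
    nn.Conv2d(C, C, kernel_size=3, padding=1, bias=False),
    nn.Conv2d(C, C, kernel_size=3, padding=1, bias=False),
)
single = nn.Conv2d(C, C, kernel_size=5, padding=2, bias=False)

n_stacked = sum(p.numel() for p in stacked.parameters())  # 2 * 3*3*C*C = 73,728
n_single = sum(p.numel() for p in single.parameters())    # 5*5*C*C   = 102,400
print(n_stacked, n_single)  # the stack uses roughly 28% fewer parameters
```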
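A sketch of the convolution, then batch normalisation, then activation ordering, finished with a global-average-pooling head; the layer widths, depth, and class count are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Batch normalisation sits after the convolution and before the activation.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        # Global average pooling collapses each feature map to a single value,
        # so the classifier needs only 64 * num_classes weights instead of a
        # large fully connected layer over the whole feature map.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.classifier(x)

logits = TinyCNN()(torch.randn(2, 3, 32, 32))  # -> shape (2, 10)
```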