Translations:Convolutional Neural Networks/31/en: Difference between revisions

    From Marovi AI
    * Use pretrained models (transfer learning) when labelled data is limited.
    * Prefer small kernels (<math>3 \times 3</math>) stacked in depth — two <math>3 \times 3</math> layers have the same receptive field as one <math>5 \times 5</math> layer but with fewer parameters.
    * Apply {{Term|batch normalization|batch normalisation}} after convolution and before {{Term|activation function|activation}}.
    * Use data augmentation generously to reduce [[Overfitting and Regularization|overfitting]].
    * Replace fully connected layers with global average {{Term|pooling}} to reduce parameters.
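The kernel-size claim in the list above can be checked with quick arithmetic. A minimal sketch (the channel count <code>C = 64</code> is an illustrative assumption, and weight counts ignore biases):

```python
def conv_params(kernel: int, in_ch: int, out_ch: int) -> int:
    """Weights in a single 2-D convolution layer (no bias term)."""
    return kernel * kernel * in_ch * out_ch

def stacked_receptive_field(kernels: list[int]) -> int:
    """Receptive field of stride-1 convolutions stacked in depth:
    each layer with kernel k widens the field by k - 1."""
    rf = 1
    for k in kernels:
        rf += k - 1
    return rf

C = 64  # hypothetical channel width, kept constant through the stack

# Two stacked 3x3 layers vs. one 5x5 layer: same 5x5 receptive field...
assert stacked_receptive_field([3, 3]) == stacked_receptive_field([5]) == 5

# ...but fewer weights: 2 * 9 * C^2 = 18C^2 versus 25C^2.
two_3x3 = 2 * conv_params(3, C, C)
one_5x5 = conv_params(5, C, C)
print(two_3x3, one_5x5)  # 73728 102400 — the stacked pair uses ~28% fewer weights
```

The stacked pair also interleaves an extra non-linearity between the two convolutions, which is a second (often-cited) advantage beyond the parameter saving.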

    Latest revision as of 23:34, 27 April 2026
