Translations:Transfer Learning/24/en

    Latest revision as of 23:34, 27 April 2026

    Message definition (Transfer Learning)
    * '''Data augmentation''' complements transfer learning by artificially expanding the effective size of the target dataset.
    * '''{{Term|learning rate}} warmup''' helps stabilise early training when {{Term|fine-tuning}} large pretrained models.
    * '''Early stopping''' on a validation set prevents {{Term|overfitting}} during {{Term|fine-tuning}}, especially with small datasets.
    * '''Layer-wise {{Term|learning rate}} decay''' assigns smaller rates to earlier (more general) layers and larger rates to later (more task-specific) layers.
    * '''Intermediate task transfer''' — {{Term|fine-tuning}} on a related intermediate task before the final target (e.g., NLI before sentiment analysis) can further improve results.
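The layer-wise learning rate decay and warmup techniques above can be sketched in plain Python (no framework assumed; the function names, the decay factor, and the linear warmup schedule are illustrative choices, not a reference implementation):

```python
def layerwise_lrs(base_lr, num_layers, decay=0.9):
    """Layer-wise learning rate decay: earlier (more general) layers get
    smaller rates. Layer i receives base_lr * decay**(num_layers - 1 - i),
    so the last (most task-specific) layer trains at the full base_lr."""
    return [base_lr * decay ** (num_layers - 1 - i) for i in range(num_layers)]

def warmup_scale(step, warmup_steps):
    """Linear learning rate warmup: a factor that ramps from 0 to 1 over
    the first warmup_steps optimizer steps, then stays at 1."""
    return min(1.0, step / warmup_steps)

# Four-layer model, base rate 1e-3, aggressive decay of 0.5 per layer:
lrs = layerwise_lrs(base_lr=1e-3, num_layers=4, decay=0.5)
# Halfway through a 100-step warmup, every layer's rate is scaled by 0.5:
scaled = [lr * warmup_scale(step=50, warmup_steps=100) for lr in lrs]
```

In a real fine-tuning run these per-layer rates would typically be passed to the optimizer as per-parameter-group settings, with the warmup factor applied each step by a scheduler.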