- Data augmentation complements transfer learning by artificially expanding the effective size of the target dataset.
- Learning rate warmup helps stabilise early training when fine-tuning large pretrained models.
- Early stopping on a validation set prevents overfitting during fine-tuning, especially with small datasets.
- Layer-wise learning rate decay assigns smaller rates to earlier (more general) layers and larger rates to later (more task-specific) layers.
- Intermediate task transfer: fine-tuning on a related intermediate task before the final target task (e.g., NLI before sentiment analysis) can further improve results.
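The warmup idea above can be sketched as a simple schedule function. This is a minimal illustration (the names `warmup_lr`, `warmup_steps`, and the linear ramp shape are assumptions for the sketch, not a specific library's API):

```python
def warmup_lr(step, warmup_steps, base_lr):
    """Linearly ramp the learning rate from ~0 to base_lr over warmup_steps,
    then hold it at base_lr. Steps are 0-indexed."""
    if step < warmup_steps:
        # Scale proportionally to how far into warmup we are.
        return base_lr * (step + 1) / warmup_steps
    return base_lr
```

In practice the constant phase is usually replaced by a decay schedule (cosine or linear), but the early ramp is what protects the pretrained weights from large, noisy gradient updates at the start of fine-tuning.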
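Early stopping as described above amounts to tracking the best validation loss and stopping after it fails to improve for a fixed number of epochs. A minimal sketch (the class name, `patience`, and `min_delta` are illustrative choices, not from the original text):

```python
class EarlyStopping:
    """Stop training when validation loss has not improved for `patience` epochs."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs without improvement to tolerate
        self.min_delta = min_delta    # minimum decrease that counts as improvement
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

Typically one also checkpoints the model at each new best loss, so that training can be rolled back to the best-performing weights rather than the final ones.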
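Layer-wise learning rate decay, as described above, multiplies the base rate by a decay factor for each layer further from the output. A minimal sketch of the rate assignment (function name and signature are assumptions; in a real setup these rates would feed per-parameter-group optimizer options):

```python
def layerwise_lrs(base_lr, num_layers, decay=0.9):
    """Return one learning rate per layer, smallest for layer 0 (earliest,
    most general) and base_lr for the last layer (most task-specific)."""
    return [base_lr * decay ** (num_layers - 1 - i) for i in range(num_layers)]
```

For example, with `base_lr=1e-3`, 4 layers, and `decay=0.5`, the earliest layer trains at one eighth of the head's rate, preserving the general features it learned during pretraining.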