Translations:Transfer Learning/15/en: Difference between revisions

ImageNet-pretrained convolutional networks (ResNet, EfficientNet, ViT) serve as standard backbones. Lower layers learn universal features such as edges and textures, while higher layers learn task-specific patterns. {{Term|Fine-tuning}} an ImageNet model on a medical imaging dataset with only a few thousand images routinely outperforms training from scratch.
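The recipe above can be sketched in PyTorch. This is a minimal illustration, not the article's own code: the tiny stand-in backbone and the two-class head are hypothetical (in practice you would load an ImageNet-pretrained ResNet or EfficientNet, e.g. from torchvision, rather than define one from scratch). The key move is the same either way: freeze the lower, general-purpose layers and train only the new task-specific head.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained backbone (lower layers:
# universal features such as edges and textures). In real use this
# would be an ImageNet-pretrained network, e.g. a torchvision ResNet.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

# New task-specific head, e.g. a 2-class medical imaging problem.
head = nn.Linear(8, 2)

# Freeze the backbone; only the head's parameters will be updated.
for p in backbone.parameters():
    p.requires_grad = False

model = nn.Sequential(backbone, head)
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

# One dummy fine-tuning step on a batch of 4 small RGB images.
x = torch.randn(4, 3, 32, 32)
y = torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(trainable)  # only the head: 8 * 2 weights + 2 biases = 18
```

Freezing everything but the head is the cheapest variant; with a few thousand images one would often unfreeze the top backbone layers as well and fine-tune them at a lower learning rate.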

    Latest revision as of 23:34, 27 April 2026

