Translations:Neural Networks/25/en: Difference between revisions

    From Marovi AI
    Successful training also requires attention to '''initialisation''' (e.g. Xavier or He schemes), '''{{Term|regularization|regularisation}}''' (to prevent [[Overfitting and Regularization|overfitting]]), and '''{{Term|hyperparameter}} tuning''' ({{Term|learning rate}}, batch size, network architecture).
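The three practices named above can be sketched concretely. The snippet below is a minimal illustration, not part of the original page: the Xavier and He formulas follow their standard definitions, while the layer sizes, learning rate, batch size, and L2 strength are arbitrary placeholder values chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(fan_in, fan_out):
    # Xavier/Glorot uniform: samples from U(-limit, limit),
    # limit = sqrt(6 / (fan_in + fan_out))
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def he_init(fan_in, fan_out):
    # He normal: std = sqrt(2 / fan_in), suited to ReLU activations
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

def l2_penalty(W, lam):
    # L2 regularisation adds lam * ||W||^2 to the loss,
    # discouraging large weights and hence overfitting
    return lam * np.sum(W ** 2)

# Hyperparameters mentioned in the text (illustrative values only)
learning_rate = 1e-3
batch_size = 32
l2_lambda = 1e-4

W = he_init(256, 128)          # one weight matrix of a 256 -> 128 layer
penalty = l2_penalty(W, l2_lambda)
```

Hyperparameter tuning then amounts to searching over values such as `learning_rate`, `batch_size`, and `l2_lambda` (e.g. by grid or random search) and keeping the configuration with the best validation score.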

    Revision as of 22:01, 27 April 2026
