Translations:Deep Residual Learning for Image Recognition/4/en

As neural networks grew deeper in the mid-2010s, researchers observed a counterintuitive '''degradation problem''': adding more layers to a network eventually caused training accuracy to degrade, not because of {{Term|overfitting}} but because of optimization difficulty. A 56-layer plain network performed worse than a 20-layer network on both the training and test sets, indicating that deeper networks were harder to optimize rather than simply more prone to {{Term|overfitting}}.

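The paper's answer to this observation is the residual formulation: rather than asking a stack of layers to learn a desired mapping <math>H(x)</math> directly, the layers learn the residual <math>F(x) = H(x) - x</math>, and an identity shortcut adds <math>x</math> back to the block's output. Since a deeper network could in principle match a shallower one by making its extra layers identity mappings, making identity the easy default targets exactly the optimization difficulty described above. The minimal PyTorch sketch below contrasts the two block types; the class names, channel count, and two-convolution layout are illustrative, not taken verbatim from the paper.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class PlainBlock(nn.Module):
    """Two stacked 3x3 conv layers with no shortcut: the block must
    learn the full mapping H(x) directly."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.body(x))

class ResidualBlock(PlainBlock):
    """Same layers, but the block only learns the residual F(x);
    the identity shortcut makes H(x) = x trivially reachable by
    driving the conv weights toward zero."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.body(x) + x)  # output = F(x) + x

if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)  # batch, channels, height, width
    print(PlainBlock(64)(x).shape)     # torch.Size([1, 64, 32, 32])
    print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
</syntaxhighlight>

The only difference between the two classes is the <code>+ x</code> in the forward pass, which is essentially the entire architectural change the paper proposes for blocks whose input and output dimensions match.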