Deep neural networks with many parameters are powerful function approximators but are prone to {{Term|overfitting}}, especially when training data is limited. Traditional {{Term|regularization}} methods such as L2 {{Term|weight decay}} and early stopping provided some relief, but were often insufficient for large networks. Model combination — training multiple models and averaging their predictions — was known to reduce {{Term|overfitting}} but was computationally expensive.
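To make the L2 weight-decay idea concrete, here is a minimal sketch: a plain gradient-descent loop on an arbitrary quadratic loss, where the decay term simply adds a multiple of the weights to the gradient and shrinks them toward zero each step. The loss, learning rate `lr`, and decay strength `lam` are illustrative choices, not anything specified in the text above.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)   # model weights
lr, lam = 0.1, 0.01      # learning rate and L2 decay strength (illustrative)

def grad_loss(w):
    # gradient of an arbitrary quadratic loss 0.5 * ||w - 1||^2,
    # whose unregularized minimum is at w = 1
    return w - 1.0

for _ in range(100):
    # L2 weight decay adds lam * w to the gradient,
    # pulling every weight slightly toward zero on each update
    w -= lr * (grad_loss(w) + lam * w)

# with decay, the weights settle a little below the unregularized optimum
print(w)
```

The closed-form fixed point here is w = 1 / (1 + lam), which illustrates the general effect: the penalty biases the solution toward smaller weights, trading a little fit for less {{Term|overfitting}}.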