All translations
Found 3 translations.
| Language | Current message text |
|---|---|
| English (en) | * '''Higher {{Term|learning rate|learning rates}}''': By constraining {{Term|activation function|activation}} distributions, BatchNorm allows larger {{Term|learning rate|step sizes}} without divergence. * '''Reduced sensitivity to initialization''': Networks with BatchNorm are more forgiving of poor weight initialization. * '''{{Term|regularization}} effect''': The noise introduced by {{Term|mini-batch}} statistics acts as a mild {{Term|regularization|regularizer}}, sometimes reducing the need for [[Dropout]]. * '''Faster {{Term|convergence}}''': Training typically requires fewer {{Term|epoch|epochs}} to reach a given level of performance. |
| Spanish (es) | * '''{{Term|learning rate|Tasas de aprendizaje}} más altas''': Al restringir las distribuciones de {{Term|activation function|activación}}, BatchNorm permite {{Term|learning rate|tamaños de paso}} mayores sin divergencia. * '''Menor sensibilidad a la inicialización''': Las redes con BatchNorm son más tolerantes a una mala inicialización de pesos. * '''Efecto de {{Term|regularization|regularización}}''': El ruido introducido por las estadísticas del {{Term|mini-batch|mini-lote}} actúa como un suave {{Term|regularization|regularizador}}, reduciendo a veces la necesidad de [[Dropout]]. * '''{{Term|convergence|Convergencia}} más rápida''': El entrenamiento normalmente requiere menos {{Term|epoch|épocas}} para alcanzar un nivel dado de rendimiento. |
| Chinese (zh) | * '''更高的 {{Term|learning rate|学习率}}''':通过约束 {{Term|activation function|激活}}分布,BatchNorm 允许更大的 {{Term|learning rate|步长}}而不发散。 * '''降低对初始化的敏感性''':使用 BatchNorm 的网络对较差的权重初始化更具容忍度。 * '''{{Term|regularization|正则化}}效应''':{{Term|mini-batch|小批量}}统计量引入的噪声起到了轻微的 {{Term|regularization|正则化器}}作用,有时可以减少对 [[Dropout]] 的需求。 * '''更快的 {{Term|convergence|收敛}}''':训练通常需要更少的 {{Term|epoch|轮次}}就能达到给定的性能水平。 |
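The benefits listed in the message above all stem from normalizing layer inputs with per-mini-batch statistics. As a rough illustration (not part of the translated message itself), the following minimal NumPy sketch shows the standard batch-normalization transform; the function name `batch_norm_forward` and the parameters `gamma` and `beta` (learnable scale and shift) are hypothetical names chosen for this example.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature using mini-batch statistics, then rescale.

    x     : (batch_size, num_features) pre-activation values
    gamma : (num_features,) learnable scale
    beta  : (num_features,) learnable shift
    """
    mu = x.mean(axis=0)                        # per-feature mini-batch mean
    var = x.var(axis=0)                        # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)      # zero mean, unit variance
    return gamma * x_hat + beta                # restore representational capacity

# Example: a mini-batch of 4 samples with 3 poorly scaled features
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(4, 3))
y = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
print(y.mean(axis=0), y.std(axis=0))           # roughly 0 mean, unit std per feature
```

Because the mean and variance are recomputed from every mini-batch, each sample is normalized against slightly different statistics during training; that per-batch noise is what the "regularization effect" bullet refers to.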