All translations
Enter a message name below to show all available translations.
Found 3 translations.
| Name | Current message text |
|---|---|
| English (en) | converges almost surely to the {{Term|global minimum}} under the Robbins–Monro step-size conditions. For non-convex problems (the typical regime in {{Term|deep learning}}), SGD converges to a stationary point, and empirical evidence shows it often finds good {{Term|local minimum|local minima}}. |
| Spanish (es) | converge casi seguramente al {{Term|global minimum|mínimo global}} bajo las condiciones de paso de Robbins–Monro. Para problemas no convexos (el régimen habitual del {{Term|deep learning|aprendizaje profundo}}), el SGD converge a un punto estacionario, y la evidencia empírica muestra que con frecuencia encuentra buenos {{Term|local minimum|mínimos locales}}. |
| Chinese (zh) | 在 Robbins–Monro 步长条件下几乎必然收敛到{{Term|global minimum|全局最小值}}。对于非凸问题——{{Term|deep learning|深度学习}}的典型情形——SGD 收敛到一个驻点,经验证据表明它通常能找到良好的{{Term|local minimum|局部最小值}}。 |
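The message above references the Robbins–Monro step-size conditions: the learning rates η_t must satisfy Ση_t = ∞ and Ση_t² < ∞ (e.g. η_t ∝ 1/t). A minimal sketch of what this looks like in practice, assuming a toy convex objective f(w) = E[(w − x)²] with samples drawn around a target of 3.0 (all names and values here are illustrative, not from the page):

```python
import random

def sgd(samples, w0=0.0):
    """Run SGD on f(w) = E[(w - x)^2] with a Robbins-Monro schedule."""
    w = w0
    for t, x in enumerate(samples):
        eta = 0.5 / (t + 1)       # sum(eta) diverges, sum(eta^2) converges
        grad = 2.0 * (w - x)      # stochastic gradient of (w - x)^2
        w -= eta * grad
    return w

random.seed(0)
# Noisy samples around the true minimizer 3.0 (hypothetical data).
samples = [3.0 + random.gauss(0.0, 0.1) for _ in range(1000)]
w = sgd(samples)
print(round(w, 2))  # lands near the minimizer 3.0
```

With this particular schedule the iterate reduces to a running average of the samples, which is why it settles almost surely at the minimizer; a constant step size, by contrast, would keep the iterate bouncing in a noise ball around it.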