Translations:Stochastic Gradient Descent/22/en


    converges almost surely to the global minimum, provided the step sizes satisfy the Robbins–Monro conditions (the step sizes sum to infinity while their squares sum to a finite value). For non-convex problems, the typical regime in deep learning, SGD is only guaranteed to converge to a stationary point, though empirical evidence shows it often finds good local minima in practice.
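The Robbins–Monro conditions can be illustrated with a small sketch (not from the original article): SGD on the convex quadratic f(w) = (w − 3)², using noisy gradients and the decaying schedule η_t = c/t, which satisfies Ση_t = ∞ and Ση_t² < ∞. The function, constants, and noise level here are illustrative choices, not part of the source.

```python
import random

def noisy_grad(w, target=3.0, noise=0.5):
    # Unbiased stochastic gradient of f(w) = (w - target)^2,
    # with zero-mean Gaussian noise standing in for mini-batch sampling.
    return 2.0 * (w - target) + random.gauss(0.0, noise)

def sgd(steps=20000, w0=0.0, c=0.5, seed=0):
    random.seed(seed)
    w = w0
    for t in range(1, steps + 1):
        # Robbins-Monro schedule: sum of eta diverges, sum of eta^2 converges.
        eta = c / t
        w -= eta * noisy_grad(w)
    return w

w_final = sgd()
print(w_final)  # approaches the global minimum at w = 3
```

With a constant step size instead, the iterates would only hover in a noise-dominated neighborhood of the minimum rather than converge to it.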