Translations:Adam A Method for Stochastic Optimization/24/en
Latest revision as of 21:37, 27 April 2026
The paper provided a convergence analysis showing that Adam achieves an <math>O(\sqrt{T})</math> regret bound in the online convex optimization framework, matching the best known bounds for adaptive methods.
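For context, the analysis above concerns the Adam update rule itself. The following is a minimal sketch of that update in NumPy, using the paper's standard default hyperparameters (<math>\beta_1 = 0.9</math>, <math>\beta_2 = 0.999</math>, <math>\epsilon = 10^{-8}</math>); the function and variable names here are illustrative, not from the paper:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and its
    square, with bias correction for the zero-initialized moments."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)              # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy example: minimize f(x) = x^2 starting from x = 5.0.
theta = np.array([5.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 10001):
    grad = 2 * theta           # gradient of x^2
    theta, m, v = adam_step(theta, grad, m, v, t)
```

Because the effective step size is bounded by roughly the learning rate, the iterate moves about 0.001 per step while the gradient sign is consistent, which is the adaptive-step behavior the regret analysis exploits.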