Stochastic gradient descent (often abbreviated SGD) is an iterative optimisation algorithm used to minimise an objective function written as a sum of differentiable sub-functions. It is the workhorse behind modern machine-learning training, powering everything from logistic regression to deep neural networks.
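To make the idea concrete, here is a minimal sketch in Python, not drawn from the article or any particular library: at each step one sample is chosen and the parameters take a small step against that single sample's gradient, i.e. the update w ← w − η ∇Q_i(w). The synthetic data, learning rate, and variable names below are all illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 3x + 1 plus noise (illustrative assumption).
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.1, size=200)

# Parameters of the linear model y_hat = w*x + b.
w, b = 0.0, 0.0
lr = 0.1       # learning rate (step size), chosen for illustration
epochs = 20

for epoch in range(epochs):
    # Visit the samples in a fresh random order each epoch.
    for i in rng.permutation(len(X)):
        x_i, y_i = X[i, 0], y[i]
        err = (w * x_i + b) - y_i   # residual for this single sample
        # Gradient of the per-sample squared loss 0.5 * err**2:
        # d/dw = err * x_i, d/db = err.
        w -= lr * err * x_i
        b -= lr * err

print(f"w = {w:.3f}, b = {b:.3f}")  # should approach w = 3, b = 1
</syntaxhighlight>

Because each update sees only one sample rather than the whole dataset, the per-step cost is tiny and the noisy gradient estimates are unbiased, which is what makes the method scale to the large training sets used for logistic regression and deep neural networks.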