Translations:Overfitting and Regularization/14/en

    For an L2 penalty $ \frac{\lambda}{2} \lVert \theta \rVert^2 $, the gradient of the regularization term is $ \lambda \theta $, so the gradient-descent update becomes $ \theta \leftarrow (1 - \eta \lambda)\theta - \eta \nabla_\theta L $. Each weight is therefore multiplicatively shrunk toward zero at every step, hence the name weight decay. The hyperparameter $ \lambda $ controls the regularization strength.
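
A minimal sketch of this update in NumPy, assuming plain gradient descent with an L2 penalty (the function name `sgd_weight_decay_step` and the learning-rate and decay values are illustrative, not from the original):

```python
import numpy as np

def sgd_weight_decay_step(theta, grad, lr=0.1, weight_decay=0.01):
    # One gradient-descent step with L2 weight decay.
    # Equivalent to theta <- theta - lr * (grad + weight_decay * theta):
    # the decay term scales every weight by (1 - lr * weight_decay).
    return (1.0 - lr * weight_decay) * theta - lr * grad

theta = np.array([1.0, -2.0])
grad = np.zeros_like(theta)  # zero loss gradient isolates the decay effect
theta = sgd_weight_decay_step(theta, grad)
# each weight is scaled by (1 - 0.1 * 0.01) = 0.999
```

With the loss gradient zeroed out, repeating the step shrinks the weights geometrically toward zero, which is the "decay" behavior the text describes.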