Translations:Neural Networks/24/en: Difference between revisions
Latest revision as of 23:34, 27 April 2026
1. Defining a loss function — a measure of how far the network's predictions are from the true targets (see Loss Functions).
2. Forward pass — computing the output of the network for a given input by propagating values layer by layer.
3. Backward pass (backpropagation) — computing the gradient of the loss with respect to every weight by applying the chain rule in reverse through the network (see Backpropagation).
4. Parameter update — adjusting the weights using an optimisation algorithm such as Gradient Descent or one of its variants.
5. Iteration — repeating steps 2–4 over many passes (epochs) through the training data.
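The steps above can be sketched concretely. The following is a minimal illustration, not code from the article: it trains a single linear layer with mean-squared-error loss and plain gradient descent, so the backward pass reduces to two hand-derived gradient expressions. All names (`w`, `b`, `lr`, the toy data) are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: targets follow y = 2x + 1 plus a little noise.
x = rng.uniform(-1, 1, size=(64, 1))
y_true = 2.0 * x + 1.0 + 0.01 * rng.normal(size=(64, 1))

w = np.zeros((1, 1))   # weight
b = np.zeros((1,))     # bias
lr = 0.1               # learning rate for gradient descent

for epoch in range(500):           # step 5: iterate over many epochs
    y_pred = x @ w + b             # step 2: forward pass
    err = y_pred - y_true
    loss = np.mean(err ** 2)       # step 1: loss function (MSE)
    # step 3: backward pass — gradients of the MSE w.r.t. w and b,
    # written out by hand; a deep network would chain these layer by layer
    grad_w = 2.0 * x.T @ err / len(x)
    grad_b = 2.0 * err.mean(axis=0)
    # step 4: parameter update (vanilla gradient descent)
    w -= lr * grad_w
    b -= lr * grad_b

print(float(w[0, 0]), float(b[0]))  # converges toward the true (2, 1)
```

For a real multi-layer network the gradient expressions are not written by hand; backpropagation applies the chain rule mechanically through every layer, which is what automatic-differentiation frameworks implement.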