Translations:Attention Mechanisms/7/en: Difference between revisions
Revision as of 19:41, 27 April 2026
where <math>W_s</math>, <math>W_h</math>, and <math>v</math> are learned parameters. The attention weights are obtained by applying softmax:
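As a minimal sketch of how these parameters combine, assuming the standard additive (Bahdanau-style) score <math>e_i = v^\top \tanh(W_s s + W_h h_i)</math> over encoder states <math>h_1, \dots, h_T</math> and decoder state <math>s</math> (the exact score form is not shown in this passage); all dimensions and the random initialization below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the article):
# decoder dim, encoder dim, attention dim, sequence length
d_s, d_h, d_a, T = 4, 4, 8, 5

# Learned parameters (randomly initialized here for illustration)
W_s = rng.normal(size=(d_a, d_s))
W_h = rng.normal(size=(d_a, d_h))
v = rng.normal(size=(d_a,))

s = rng.normal(size=(d_s,))    # current decoder state
H = rng.normal(size=(T, d_h))  # encoder states h_1 .. h_T, one per row

# Additive attention scores: e_i = v^T tanh(W_s s + W_h h_i), shape (T,)
e = np.tanh(W_s @ s + H @ W_h.T) @ v

# Attention weights via a numerically stable softmax over the scores
alpha = np.exp(e - e.max())
alpha /= alpha.sum()

print(alpha)  # nonnegative weights over the T encoder states, summing to 1
```

The softmax guarantees the weights are nonnegative and sum to one, so they can be used directly to form a weighted average (context vector) of the encoder states.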