At each {{Term|training step}}, every neuron in a dropout layer is independently retained with probability <math>p</math> (the '''keep probability''') or set to zero with probability <math>1 - p</math>. Formally, for a layer with {{Term|activation function|activation}} {{Term|vector}} <math>\mathbf{h}</math>:
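The formula that this sentence introduces is missing from the extract. A standard formulation consistent with the definition above (reconstructed here, not taken from the source) samples an independent Bernoulli mask and applies it elementwise:

<math display="block">
\tilde{\mathbf{h}} = \mathbf{m} \odot \mathbf{h}, \qquad m_i \sim \mathrm{Bernoulli}(p) \text{ i.i.d.},
</math>

so each component <math>\tilde{h}_i</math> equals <math>h_i</math> with probability <math>p</math> and <math>0</math> with probability <math>1 - p</math>.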
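The retain-or-zero rule described above can be sketched directly. This is a minimal illustration of the stated definition only (practical implementations usually also rescale surviving activations by <math>1/p</math>, so-called inverted dropout, which the text does not mention); the function name and use of Python's `random` module are choices made here, not part of the source.

```python
import random

def dropout(h, p):
    """Apply dropout to an activation vector h.

    Each unit is independently retained with probability p
    (the keep probability) or set to zero with probability 1 - p.
    """
    # random.random() < p is True with probability p, giving a
    # Bernoulli(p) keep decision per unit.
    return [x if random.random() < p else 0.0 for x in h]
```

With `p = 1.0` every unit survives and the layer is the identity; with `p = 0.0` the output is all zeros.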