All translations


Found 3 translations.

Name: Current message text

English (en): Training an RNN requires computing gradients of the loss with respect to the shared weights. '''{{Term|backpropagation}} through time''' (BPTT) "unrolls" the RNN across time {{Term|training step|steps}}, producing a deep {{Term|feedforward neural network|feedforward network}} with shared weights, and then applies standard [[Backpropagation|backpropagation]].

Spanish (es): Entrenar una RNN requiere calcular gradientes de la pérdida con respecto a los pesos compartidos. La '''{{Term|backpropagation|retropropagación}} a través del tiempo''' (BPTT) "despliega" la RNN a lo largo de los {{Term|training step|pasos}} de tiempo, produciendo una {{Term|feedforward neural network|red feedforward}} profunda con pesos compartidos, y luego aplica la [[Backpropagation|retropropagación]] estándar.

Chinese (zh): 训练 RNN 需要计算损失相对于共享权重的梯度。'''跨时间{{Term|backpropagation|反向传播}}'''（BPTT）将 RNN 沿时间{{Term|training step|步}}"展开"，生成一个权重共享的深层{{Term|feedforward neural network|前馈网络}}，然后应用标准的[[Backpropagation|反向传播]]。
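The unrolling that the message describes can be sketched in a toy example. The snippet below is a minimal illustration, not from any library: a scalar RNN h_t = tanh(w·h_{t-1} + u·x_t) with a squared-error loss on the final state. BPTT walks backward through the unrolled steps and accumulates the gradient of the shared weight w at every step, exactly because w is reused across time; all names (`forward`, `bptt_grad_w`) are hypothetical.

```python
import math

def forward(w, u, xs, h0=0.0):
    """Run the scalar RNN h_t = tanh(w*h_{t-1} + u*x_t); return [h_0, ..., h_T]."""
    hs = [h0]
    for x in xs:
        hs.append(math.tanh(w * hs[-1] + u * x))
    return hs

def bptt_grad_w(w, u, xs, target):
    """Gradient of L = 0.5*(h_T - target)^2 w.r.t. the shared weight w, via BPTT."""
    hs = forward(w, u, xs)
    dh = hs[-1] - target  # dL/dh_T
    dw = 0.0
    # Walk backward through the unrolled feedforward network of T steps.
    for t in range(len(xs), 0, -1):
        dpre = dh * (1.0 - hs[t] ** 2)  # through tanh (hs[t] = tanh(pre_t))
        dw += dpre * hs[t - 1]          # contribution of the *shared* weight at step t
        dh = dpre * w                   # propagate the error back to h_{t-1}
    return dw
```

Because the weight is shared, the total gradient is a sum of per-step contributions; a finite-difference check on `forward` confirms the accumulated value.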