# A neural network produces raw {{Term|logits}} <math>\mathbf{z}</math> from its final linear layer.
# Softmax converts {{Term|logits}} to probabilities: <math>\hat{\mathbf{y}} = \sigma(\mathbf{z})</math>.
# The predicted class is <math>\hat{c} = \arg\max_k \hat{y}_k</math>.
# Training uses [[Cross-Entropy Loss]] applied to the predicted distribution and the true labels.
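The four steps above can be sketched numerically. This is a minimal NumPy illustration, not a training loop: the logit values and the one-hot label are hypothetical, and softmax is written out explicitly so that <math>\sigma</math> is concrete.

```python
import numpy as np

def softmax(z):
    # Subtract the max logit for numerical stability; the result sums to 1.
    e = np.exp(z - np.max(z))
    return e / e.sum()

# 1. Raw logits z, as produced by a final linear layer (hard-coded here).
z = np.array([2.0, 1.0, 0.1])

# 2. Softmax converts logits to a probability vector y_hat = sigma(z).
y_hat = softmax(z)

# 3. The predicted class is the argmax over the probabilities.
c_hat = int(np.argmax(y_hat))

# 4. Cross-entropy loss between the predicted distribution and a
#    one-hot true label (class 0 in this illustrative example).
y_true = np.array([1.0, 0.0, 0.0])
loss = -np.sum(y_true * np.log(y_hat))
```

With a one-hot label, the cross-entropy reduces to the negative log-probability the model assigns to the true class, which is why confident wrong predictions are penalized heavily.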