All translations

Found 3 translations.

{| class="wikitable"
! Name !! Current message text
|-
| English (en) || '''{{Term|dropout}}''' was applied to the outputs of the first two fully connected layers during training, randomly setting each neuron's output to zero with probability 0.5. This approximately doubled the number of iterations required to converge, but substantially reduced {{Term|overfitting}}.
|-
| Spanish (es) || Durante el entrenamiento se aplicó '''{{Term|dropout|dropout}}''' a las salidas de las dos primeras capas totalmente conectadas, poniendo a cero la salida de cada neurona con una probabilidad de 0.5. Esto aproximadamente duplicó el número de iteraciones requeridas para converger, pero redujo sustancialmente el {{Term|overfitting|sobreajuste}}.
|-
| Chinese (zh) || 训练过程中对前两个全连接层的输出应用了 '''{{Term|dropout|dropout}}''',以 0.5 的概率随机将每个神经元的输出置零。这使得收敛所需的迭代次数大约翻倍,但显著减少了{{Term|overfitting|过拟合}}。
|}
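The mechanism the message describes can be sketched in a few lines of NumPy. This is a hypothetical illustration, not code from any particular library: it uses the "inverted dropout" convention (survivors are scaled by 1/(1-p) at training time), whereas the scheme the message refers to instead multiplied all outputs by 0.5 at test time; the expected activations come out the same either way.

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout, a common variant of the technique described above.

    During training, each activation is zeroed independently with
    probability p; survivors are scaled by 1/(1 - p) so the expected
    activation matches the no-dropout value. At test time the input
    passes through unchanged.
    """
    if not training or p == 0.0:
        return x
    rng = rng if rng is not None else np.random.default_rng()
    keep_mask = rng.random(x.shape) >= p  # True where the neuron survives
    return np.where(keep_mask, x / (1.0 - p), 0.0)
```

Applied to the outputs of a fully connected layer, roughly half the activations are zeroed on each forward pass, which is what forces the network to learn redundant representations and reduces overfitting.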