<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://marovi.ai/index.php?action=history&amp;feed=atom&amp;title=Translations%3AStochastic_Gradient_Descent%2F27%2Fzh</id>
	<title>Translations:Stochastic Gradient Descent/27/zh - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://marovi.ai/index.php?action=history&amp;feed=atom&amp;title=Translations%3AStochastic_Gradient_Descent%2F27%2Fzh"/>
	<link rel="alternate" type="text/html" href="https://marovi.ai/index.php?title=Translations:Stochastic_Gradient_Descent/27/zh&amp;action=history"/>
	<updated>2026-04-27T22:02:11Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.39.1</generator>
	<entry>
		<id>https://marovi.ai/index.php?title=Translations:Stochastic_Gradient_Descent/27/zh&amp;diff=5474&amp;oldid=prev</id>
		<title>DeployBot: Batch translate Stochastic Gradient Descent unit 27 → zh</title>
		<link rel="alternate" type="text/html" href="https://marovi.ai/index.php?title=Translations:Stochastic_Gradient_Descent/27/zh&amp;diff=5474&amp;oldid=prev"/>
		<updated>2026-04-27T03:38:16Z</updated>

		<summary type="html">&lt;p&gt;Batch translate Stochastic Gradient Descent unit 27 → zh&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Data shuffling&amp;#039;&amp;#039;&amp;#039;: reshuffle the dataset at each epoch to avoid cyclic patterns.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;{{Term|gradient clipping|梯度裁剪}}&amp;#039;&amp;#039;&amp;#039;: cap the gradient norm to prevent exploding updates, especially in recurrent neural networks.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;{{Term|batch normalization|批归一化}}&amp;#039;&amp;#039;&amp;#039;: normalizing layer inputs reduces sensitivity to the {{Term|learning rate|学习率}}.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Mixed-precision training&amp;#039;&amp;#039;&amp;#039;: half-precision floating point accelerates SGD on modern GPUs with almost no loss of accuracy.&lt;/div&gt;</summary>
		<author><name>DeployBot</name></author>
	</entry>
</feed>