<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://marovi.ai/index.php?action=history&amp;feed=atom&amp;title=Cross-Entropy_Loss%2Fes</id>
	<title>Cross-Entropy Loss/es - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://marovi.ai/index.php?action=history&amp;feed=atom&amp;title=Cross-Entropy_Loss%2Fes"/>
	<link rel="alternate" type="text/html" href="https://marovi.ai/index.php?title=Cross-Entropy_Loss/es&amp;action=history"/>
	<updated>2026-04-24T13:02:00Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.39.1</generator>
	<entry>
		<id>https://marovi.ai/index.php?title=Cross-Entropy_Loss/es&amp;diff=2151&amp;oldid=prev</id>
		<title>DeployBot: [deploy-bot] Deploy from CI (8c92aeb)</title>
		<link rel="alternate" type="text/html" href="https://marovi.ai/index.php?title=Cross-Entropy_Loss/es&amp;diff=2151&amp;oldid=prev"/>
		<updated>2026-04-24T07:09:00Z</updated>

		<summary type="html">&lt;p&gt;[deploy-bot] Deploy from CI (8c92aeb)&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 07:09, 24 April 2026&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l107&quot;&gt;Line 107:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 107:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Category:Machine Learning]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Category:Machine Learning]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Category:Intermediate]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Category:Intermediate]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;!--v1.2.0 cache-bust--&gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-added&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;!-- pass 2 --&gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-added&quot;&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;!-- diff cache key mediawiki:diff::1.12:old-2095:rev-2151 --&gt;
&lt;/table&gt;</summary>
		<author><name>DeployBot</name></author>
	</entry>
	<entry>
		<id>https://marovi.ai/index.php?title=Cross-Entropy_Loss/es&amp;diff=2095&amp;oldid=prev</id>
		<title>DeployBot: Pass 2 force re-parse</title>
		<link rel="alternate" type="text/html" href="https://marovi.ai/index.php?title=Cross-Entropy_Loss/es&amp;diff=2095&amp;oldid=prev"/>
		<updated>2026-04-24T07:00:35Z</updated>

		<summary type="html">&lt;p&gt;Pass 2 force re-parse&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 07:00, 24 April 2026&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l108&quot;&gt;Line 108:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 108:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Category:Intermediate]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Category:Intermediate]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;!--v1.2.0 cache-bust--&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;!--v1.2.0 cache-bust--&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;!-- pass 2 --&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;!-- diff cache key mediawiki:diff::1.12:old-2058:rev-2095 --&gt;
&lt;/table&gt;</summary>
		<author><name>DeployBot</name></author>
	</entry>
	<entry>
		<id>https://marovi.ai/index.php?title=Cross-Entropy_Loss/es&amp;diff=2058&amp;oldid=prev</id>
		<title>DeployBot: Force re-parse after Math source-mode rollout (v1.2.0)</title>
		<link rel="alternate" type="text/html" href="https://marovi.ai/index.php?title=Cross-Entropy_Loss/es&amp;diff=2058&amp;oldid=prev"/>
		<updated>2026-04-24T06:57:59Z</updated>

		<summary type="html">&lt;p&gt;Force re-parse after Math source-mode rollout (v1.2.0)&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 06:57, 24 April 2026&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l107&quot;&gt;Line 107:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 107:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Category:Machine Learning]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Category:Machine Learning]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Category:Intermediate]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Category:Intermediate]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;!--v1.2.0 cache-bust--&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;!-- diff cache key mediawiki:diff::1.12:old-2000:rev-2058 --&gt;
&lt;/table&gt;</summary>
		<author><name>DeployBot</name></author>
	</entry>
	<entry>
		<id>https://marovi.ai/index.php?title=Cross-Entropy_Loss/es&amp;diff=2000&amp;oldid=prev</id>
		<title>DeployBot: [deploy-bot] Deploy from CI (775ba6e)</title>
		<link rel="alternate" type="text/html" href="https://marovi.ai/index.php?title=Cross-Entropy_Loss/es&amp;diff=2000&amp;oldid=prev"/>
		<updated>2026-04-24T04:01:47Z</updated>

		<summary type="html">&lt;p&gt;[deploy-bot] Deploy from CI (775ba6e)&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{LanguageBar | page = Cross-Entropy Loss}}&lt;br /&gt;
{{ArticleInfobox | topic_area = Machine Learning | difficulty = Intermediate | prerequisites = [[Loss Functions]], [[Softmax Function]]}}&lt;br /&gt;
{{ContentMeta | generated_by = claude-opus | model_used = claude-opus-4-6 | generated_date = 2026-03-13}}&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;cross-entropy loss&amp;#039;&amp;#039;&amp;#039; (also called &amp;#039;&amp;#039;&amp;#039;log loss&amp;#039;&amp;#039;&amp;#039;) is the most widely used loss function for classification tasks in machine learning. Rooted in information theory, it measures the dissimilarity between the true label distribution and the probability distribution predicted by the model, providing a smooth, differentiable objective that drives probabilistic classifiers toward correct, high-confidence predictions.&lt;br /&gt;
&lt;br /&gt;
== Information-theoretic foundations ==&lt;br /&gt;
&lt;br /&gt;
=== Entropy ===&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;entropy&amp;#039;&amp;#039;&amp;#039; of a discrete probability distribution &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; quantifies its uncertainty:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;H(p) = -\sum_{k=1}^{K} p_k \log p_k&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a deterministic distribution (a one-hot label), &amp;lt;math&amp;gt;H(p) = 0&amp;lt;/math&amp;gt;. Entropy is maximized when all outcomes are equally likely.&lt;br /&gt;
&lt;br /&gt;
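Both properties can be checked numerically; a minimal NumPy sketch (illustrative only, the function name is ours):&lt;br /&gt;

```python
import numpy as np

def entropy(p):
    # H(p) = -sum_k p_k * log(p_k), using the convention 0 * log 0 = 0
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-np.sum(nz * np.log(nz)))

h_onehot = entropy([1.0, 0.0, 0.0, 0.0])       # deterministic label: 0.0
h_uniform = entropy([0.25, 0.25, 0.25, 0.25])  # uniform: log(4), the maximum
```

&lt;br /&gt;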
=== KL divergence ===&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;Kullback-Leibler divergence&amp;#039;&amp;#039;&amp;#039; measures how much a distribution &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; differs from a reference distribution &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;D_{\mathrm{KL}}(p \,\|\, q) = \sum_{k=1}^{K} p_k \log \frac{p_k}{q_k}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The KL divergence is non-negative and equal to zero if and only if &amp;lt;math&amp;gt;p = q&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Cross-entropy ===&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;cross-entropy&amp;#039;&amp;#039;&amp;#039; between the distributions &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; (true) and &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; (predicted) is:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;H(p, q) = -\sum_{k=1}^{K} p_k \log q_k = H(p) + D_{\mathrm{KL}}(p \,\|\, q)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;H(p)&amp;lt;/math&amp;gt; is constant with respect to the model parameters, minimizing the cross-entropy is equivalent to minimizing the KL divergence, that is, making the predicted distribution &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; as close as possible to the true distribution &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
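The decomposition above is easy to verify numerically; a small NumPy sketch (the example distributions are ours):&lt;br /&gt;

```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])  # true distribution (example values)
q = np.array([0.5, 0.3, 0.2])  # predicted distribution (example values)

cross_entropy = float(-np.sum(p * np.log(q)))  # H(p, q)
h_p = float(-np.sum(p * np.log(p)))            # H(p)
kl_pq = float(np.sum(p * np.log(p / q)))       # KL divergence of q from p
# H(p, q) equals H(p) plus the KL term, and the KL term is non-negative
```

&lt;br /&gt;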
== Binary cross-entropy ==&lt;br /&gt;
&lt;br /&gt;
For binary classification with true label &amp;lt;math&amp;gt;y \in \{0, 1\}&amp;lt;/math&amp;gt; and predicted probability &amp;lt;math&amp;gt;\hat{y} = \sigma(z)&amp;lt;/math&amp;gt; (where &amp;lt;math&amp;gt;\sigma&amp;lt;/math&amp;gt; is the [[Softmax Function|sigmoid function]]):&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\mathcal{L}_{\mathrm{BCE}} = -\bigl[y \log \hat{y} + (1 - y) \log(1 - \hat{y})\bigr]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Over a dataset of &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; samples:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N} \bigl[y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i)\bigr]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The gradient with respect to the logit &amp;lt;math&amp;gt;z&amp;lt;/math&amp;gt; takes the elegantly simple form &amp;lt;math&amp;gt;\hat{y} - y&amp;lt;/math&amp;gt;, which is both intuitive and computationally efficient.&lt;br /&gt;
&lt;br /&gt;
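The closed-form gradient can be sanity-checked against a finite difference; a NumPy sketch with an arbitrary logit (the numeric values are ours):&lt;br /&gt;

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(z, y):
    # binary cross-entropy expressed in terms of the logit z
    y_hat = sigmoid(z)
    return -(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat))

z, y, eps = 1.3, 1.0, 1e-6
numeric_grad = (bce(z + eps, y) - bce(z - eps, y)) / (2.0 * eps)
analytic_grad = sigmoid(z) - y  # the closed form stated above
```

&lt;br /&gt;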
== Categorical cross-entropy ==&lt;br /&gt;
&lt;br /&gt;
For multiclass classification with &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; classes, the true label is typically a one-hot vector &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;y_c = 1&amp;lt;/math&amp;gt; for the correct class &amp;lt;math&amp;gt;c&amp;lt;/math&amp;gt;. The predicted probabilities &amp;lt;math&amp;gt;\hat{\mathbf{y}}&amp;lt;/math&amp;gt; are obtained via the [[Softmax Function|softmax function]]:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\mathcal{L}_{\mathrm{CE}} = -\sum_{k=1}^{K} y_k \log \hat{y}_k = -\log \hat{y}_c&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This reduces to the negative log-probability of the correct class, which is why categorical cross-entropy is also called the &amp;#039;&amp;#039;&amp;#039;negative log-likelihood&amp;#039;&amp;#039;&amp;#039; in this context.&lt;br /&gt;
&lt;br /&gt;
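For a one-hot target, the full sum and the single-term form agree; a small NumPy sketch (the probabilities are example values):&lt;br /&gt;

```python
import numpy as np

probs = np.array([0.1, 0.7, 0.2])  # softmax output (example values)
y = np.array([0.0, 1.0, 0.0])      # one-hot target, correct class c = 1

ce_full = float(-np.sum(y * np.log(probs)))  # full sum over all classes
ce_nll = float(-np.log(probs[1]))            # negative log-prob of class c
# both equal -log(0.7), about 0.357
```

&lt;br /&gt;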
== Numerical stability ==&lt;br /&gt;
&lt;br /&gt;
=== The log-sum-exp trick ===&lt;br /&gt;
&lt;br /&gt;
Naively computing &amp;lt;math&amp;gt;\log(\mathrm{softmax}(z_k))&amp;lt;/math&amp;gt; involves exponentiating potentially large logits, causing overflow. The &amp;#039;&amp;#039;&amp;#039;log-sum-exp&amp;#039;&amp;#039;&amp;#039; trick avoids this:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\log \hat{y}_k = z_k - \log \sum_{j=1}^{K} e^{z_j} = z_k - \left(m + \log \sum_{j=1}^{K} e^{z_j - m}\right)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;m = \max_j z_j&amp;lt;/math&amp;gt;. Subtracting the maximum logit ensures that the largest exponent is zero, preventing overflow. All major deep learning frameworks implement this fused operation (for example, &amp;lt;code&amp;gt;CrossEntropyLoss&amp;lt;/code&amp;gt; in PyTorch accepts raw logits).&lt;br /&gt;
&lt;br /&gt;
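A stable log-softmax following this identity, as a NumPy sketch (illustrative, not a framework implementation):&lt;br /&gt;

```python
import numpy as np

def log_softmax(z):
    # z_k - (m + log sum_j exp(z_j - m)), with m the maximum logit
    z = np.asarray(z, dtype=float)
    m = z.max()
    return z - (m + np.log(np.sum(np.exp(z - m))))

logits = np.array([1000.0, 1001.0, 1002.0])  # naive exp(z) would overflow
stable = log_softmax(logits)                 # finite log-probabilities
```

&lt;br /&gt;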
=== Clipping ===&lt;br /&gt;
&lt;br /&gt;
Predicted probabilities must be clipped away from exactly 0 and 1 to avoid &amp;lt;math&amp;gt;\log(0) = -\infty&amp;lt;/math&amp;gt;. A small epsilon (for example, &amp;lt;math&amp;gt;10^{-7}&amp;lt;/math&amp;gt;) is typically used.&lt;br /&gt;
&lt;br /&gt;
== Label smoothing ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Label smoothing&amp;#039;&amp;#039;&amp;#039; (Szegedy et al., 2016) replaces the hard one-hot target with a softened distribution:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;y_k^{\mathrm{smooth}} = (1 - \alpha)\, y_k + \frac{\alpha}{K}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha&amp;lt;/math&amp;gt; is a small constant (commonly 0.1). This keeps the model from becoming overconfident, improves calibration, and often yields better generalization. It is standard practice when training large image classifiers and Transformer models.&lt;br /&gt;
&lt;br /&gt;
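The smoothing formula in code, as a NumPy sketch (the function name is ours):&lt;br /&gt;

```python
import numpy as np

def smooth_labels(y_onehot, alpha=0.1):
    # (1 - alpha) * y_k + alpha / K
    y = np.asarray(y_onehot, dtype=float)
    k = y.shape[-1]
    return (1.0 - alpha) * y + alpha / k

y = np.array([0.0, 0.0, 1.0, 0.0])
smoothed = smooth_labels(y)  # [0.025, 0.025, 0.925, 0.025]; still sums to 1
```

&lt;br /&gt;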
== Comparison with other losses ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Loss !! Formula !! Typical use&lt;br /&gt;
|-&lt;br /&gt;
| &amp;#039;&amp;#039;&amp;#039;Cross-entropy&amp;#039;&amp;#039;&amp;#039; || &amp;lt;math&amp;gt;-\sum y_k \log \hat{y}_k&amp;lt;/math&amp;gt; || Classification&lt;br /&gt;
|-&lt;br /&gt;
| &amp;#039;&amp;#039;&amp;#039;Mean squared error&amp;#039;&amp;#039;&amp;#039; || &amp;lt;math&amp;gt;\frac{1}{K}\sum(y_k - \hat{y}_k)^2&amp;lt;/math&amp;gt; || Regression (poorly suited to classification)&lt;br /&gt;
|-&lt;br /&gt;
| &amp;#039;&amp;#039;&amp;#039;Hinge loss&amp;#039;&amp;#039;&amp;#039; || &amp;lt;math&amp;gt;\max(0, 1 - y \cdot z)&amp;lt;/math&amp;gt; || SVM-style classification&lt;br /&gt;
|-&lt;br /&gt;
| &amp;#039;&amp;#039;&amp;#039;Focal loss&amp;#039;&amp;#039;&amp;#039; || &amp;lt;math&amp;gt;-(1-\hat{y}_c)^\gamma \log \hat{y}_c&amp;lt;/math&amp;gt; || Imbalanced classification&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Cross-entropy has steeper gradients than MSE when a prediction is confidently wrong, leading to faster correction of large errors.&lt;br /&gt;
&lt;br /&gt;
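The difference in gradient magnitude is easy to see with respect to the predicted probability itself; a small Python sketch (the example prediction is ours):&lt;br /&gt;

```python
# true class y = 1 predicted with probability 0.01: confidently wrong
y, y_hat = 1.0, 0.01

grad_ce = abs(-1.0 / y_hat)        # d/dy_hat of -log(y_hat)
grad_mse = abs(2.0 * (y_hat - y))  # d/dy_hat of (y_hat - y)**2
# cross-entropy pushes back about two orders of magnitude harder here
```

&lt;br /&gt;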
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Loss Functions]]&lt;br /&gt;
* [[Softmax Function]]&lt;br /&gt;
* [[Logistic regression]]&lt;br /&gt;
* [[Information theory]]&lt;br /&gt;
* [[Neural Networks]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
* Shannon, C. E. (1948). &amp;quot;A Mathematical Theory of Communication&amp;quot;. &amp;#039;&amp;#039;Bell System Technical Journal&amp;#039;&amp;#039;.&lt;br /&gt;
* Goodfellow, I., Bengio, Y. and Courville, A. (2016). &amp;#039;&amp;#039;Deep Learning&amp;#039;&amp;#039;. MIT Press, Chapter 6.&lt;br /&gt;
* Szegedy, C. et al. (2016). &amp;quot;Rethinking the Inception Architecture for Computer Vision&amp;quot;. &amp;#039;&amp;#039;CVPR&amp;#039;&amp;#039;.&lt;br /&gt;
* Lin, T.-Y. et al. (2017). &amp;quot;Focal Loss for Dense Object Detection&amp;quot;. &amp;#039;&amp;#039;ICCV&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
[[Category:Machine Learning]]&lt;br /&gt;
[[Category:Intermediate]]&lt;/div&gt;</summary>
		<author><name>DeployBot</name></author>
	</entry>
</feed>