Normalized training loss vs. baselines (Facebook).
Published: 2025
Summary: <p>DCOR reports both the reconstruction-only and the total (with RLC) loss; baselines report reconstruction-only. Each curve is normalized as in <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0335135#pone.0335135.e243" target="_blank">Eq (29)</a> by dividing by its epoch-1 value, then smoothed with an exponential moving average (EMA), computed recursively as EMA_t = α · loss_t + (1 − α) · EMA_{t−1} for a smoothing factor α. This normalization enables fair visual comparison across methods with different objectives and scales; the plot therefore emphasizes relative convergence trends (shape and stability) rather than raw magnitudes. Consistent with DCOR’s design, RLC regularizes late-phase training: the reconstruction curve decreases more conservatively than for methods that minimize reconstruction alone, while the total objective continues to decrease.</p>
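The normalization and smoothing described above can be sketched as follows. This is a minimal illustration, not the paper's code: the helper name `normalize_and_smooth` and the default smoothing factor `alpha` are assumptions (the excerpt does not give the value used), and only the two stated steps are implemented: divide by the epoch-1 value, then apply a standard EMA recursion.

```python
import numpy as np

def normalize_and_smooth(losses, alpha=0.1):
    """Normalize a per-epoch loss curve and EMA-smooth it.

    Step 1: divide by the epoch-1 value, so every curve starts at 1.0
            (enables comparison across objectives with different scales).
    Step 2: exponential moving average,
            EMA_t = alpha * loss_t + (1 - alpha) * EMA_{t-1}.
    `alpha` is a hypothetical default; the paper's value is not in this excerpt.
    """
    losses = np.asarray(losses, dtype=float)
    normalized = losses / losses[0]      # epoch-1 value maps to 1.0
    smoothed = np.empty_like(normalized)
    smoothed[0] = normalized[0]          # initialize the EMA at the first point
    for t in range(1, len(normalized)):
        smoothed[t] = alpha * normalized[t] + (1 - alpha) * smoothed[t - 1]
    return smoothed
```

With `alpha = 1.0` the EMA reduces to the normalized curve itself; smaller values weight the running average more heavily, damping epoch-to-epoch noise so that the plotted curves reflect convergence shape and stability rather than raw fluctuations.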