
Novel Loss Functions for Improved Data Visualization in t-SNE

  • Hamad bin Khalifa University
  • Bishop's University

Research output: Contribution to journal › Article › peer-review

Abstract

t-distributed Stochastic Neighbor Embedding (t-SNE) is a popular method for projecting high-dimensional data onto a lower-dimensional space while preserving the integrity of its structure. The technique minimizes the Kullback–Leibler (KL) divergence to align the similarities between points in the original and reduced spaces. While t-SNE is highly effective, it prioritizes local neighborhood preservation, which results in limited separation between distant clusters and inadequate representation of global relationships. To address these limitations, this work introduces two complementary approaches: (1) the Max-Flipped KL divergence modifies the original divergence by incorporating a contrastive term that enhances the ranking of point similarities through maximum similarity constraints; (2) the KL-Wasserstein loss combines the KL divergence with the classic Wasserstein distance, allowing the embedding to benefit from the smooth, geometry-aware transport properties of Wasserstein metrics. Experimental results show that these methods yield improved cluster separation and better structural clarity in the low-dimensional space compared to standard t-SNE.
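The paper's exact Max-Flipped and KL-Wasserstein formulations are not given in this abstract, so they are not reproduced here. As a minimal sketch of the baseline objective the abstract describes, the snippet below computes the KL divergence between normalized pairwise-similarity distributions in the original and embedded spaces; the function names, the Gaussian-only similarity kernel, and the fixed bandwidth are illustrative assumptions (standard t-SNE uses a perplexity-calibrated Gaussian in the high-dimensional space and a Student-t kernel in the embedding):

```python
import numpy as np

def pairwise_affinities(X, sigma=1.0):
    """Gaussian similarities normalized into a joint probability
    distribution over point pairs (illustrative; t-SNE proper uses
    perplexity calibration and, in the embedding, a Student-t kernel)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # squared distances
    P = np.exp(-d2 / (2.0 * sigma**2))
    np.fill_diagonal(P, 0.0)                         # no self-similarity
    return P / P.sum()

def kl_divergence(P, Q, eps=1e-12):
    """KL(P || Q) over the pairwise-similarity distributions,
    the quantity t-SNE minimizes with respect to the embedding."""
    P = np.clip(P, eps, None)
    Q = np.clip(Q, eps, None)
    return float(np.sum(P * np.log(P / Q)))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))   # high-dimensional data
Y = rng.normal(size=(50, 2))    # candidate low-dimensional embedding
P = pairwise_affinities(X)
Q = pairwise_affinities(Y)
loss = kl_divergence(P, Q)      # objective t-SNE drives toward 0
print(loss)
```

In an actual optimizer, the gradient of this loss with respect to `Y` is followed to improve the embedding; the paper's contributions replace or augment this KL term with the contrastive and Wasserstein-based variants described above.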

Original language: English
Article number: 47
Journal: Machine Learning and Knowledge Extraction
Volume: 8
Issue number: 2
DOIs
Publication status: Published - Feb 2026

Keywords

  • Kullback–Leibler divergence
  • Wasserstein distance
  • loss functions
  • t-SNE
  • visualization

