LLMs Have Rhythm: Fingerprinting Large Language Models Using Inter-Token Times and Network Traffic Analysis

Saeif Alhazbi*, Ahmed Hussain, Gabriele Oligeri, Panos Papadimitratos

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

As Large Language Models (LLMs) become increasingly integrated into technological ecosystems across domains and industries, identifying which model is deployed or being interacted with is critical for the security and trustworthiness of these systems. Current verification methods typically rely on analyzing the generated output to determine the source model. However, these techniques are susceptible to adversarial attacks, operate in a post-hoc manner, and may require access to the model weights to inject a verifiable fingerprint. In this paper, we propose a novel passive fingerprinting framework that operates in real time and remains effective even under encrypted network traffic conditions. Our method leverages the intrinsically autoregressive nature of language models, which generate text one token at a time based on all previously generated tokens, creating a unique temporal pattern, like a rhythm or heartbeat, that persists even when the output is streamed over a network. We find that measuring the Inter-Token Times (ITTs), the time intervals between consecutive tokens, can identify different language models with high accuracy. We develop a Deep Learning (DL) pipeline that captures these timing patterns through network traffic analysis and evaluate it on 16 Small Language Models (SLMs) and 10 proprietary LLMs across different deployment scenarios, including a local host machine (GPU/CPU), a Local Area Network (LAN), a Remote Network, and a Virtual Private Network (VPN). Our experimental results demonstrate high classification performance, with weighted F1-scores of 85% when tested on a different day, 74% across different networks, and 71% when traffic is tunneled through a VPN connection. This work opens a new avenue for model identification in real-world scenarios and contributes to more secure and trustworthy language model deployment.
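To make the core idea concrete, the sketch below illustrates one way ITTs could be derived and summarized; it is not the authors' pipeline, and it assumes each streamed token arrives as roughly one network packet, so packet inter-arrival times approximate the Inter-Token Times fed to a downstream classifier.

```python
# Minimal sketch (not the paper's implementation): derive inter-token timing
# features from packet arrival timestamps of a streamed LLM response.
# Assumption: each streamed token corresponds to roughly one packet, so
# packet inter-arrival times approximate Inter-Token Times (ITTs).

from statistics import mean, median, stdev
from typing import Dict, List


def inter_token_times(arrival_times: List[float]) -> List[float]:
    """Compute ITTs (seconds) from a time-ordered list of packet timestamps."""
    return [t2 - t1 for t1, t2 in zip(arrival_times, arrival_times[1:])]


def itt_features(itts: List[float]) -> Dict[str, float]:
    """Summarize an ITT sequence into simple statistics that a classifier
    (e.g., a deep learning model) could consume as features."""
    return {
        "mean": mean(itts),
        "median": median(itts),
        "std": stdev(itts) if len(itts) > 1 else 0.0,
        "min": min(itts),
        "max": max(itts),
    }


# Hypothetical example: packet timestamps (seconds) for one streamed response.
timestamps = [0.000, 0.021, 0.043, 0.066, 0.090, 0.112]
itts = inter_token_times(timestamps)
print(itt_features(itts))
```

In practice, a sequence model would typically operate on the raw ITT sequence rather than summary statistics, but the same timing signal is what the traffic analysis exposes even over encrypted connections.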

Original language: English
Pages (from-to): 5050-5071
Number of pages: 22
Journal: IEEE Open Journal of the Communications Society
Volume: 6
DOIs
Publication status: Published - 5 Jun 2025

Keywords

  • Analytical models
  • Computational modeling
  • Deep learning
  • Feature extraction
  • Fingerprint recognition
  • Fingerprinting
  • Large language models
  • Local area networks
  • Network security
  • Network traffic analysis
  • Small language models
  • Telecommunication traffic
  • Timing
  • Virtual private networks
  • Watermarking
