Personalized Federated Learning With Adaptive Transformer Pruning and Hypernetwork-Driven Personalization in Wireless Networks

Research output: Contribution to journal › Article › peer-review

Abstract

Deploying transformer models in Personalized Federated Learning (PFL) at the wireless edge faces critical challenges, including high communication overhead, latency, and energy consumption. Existing compression methods, such as pruning and sparsification, typically degrade performance due to the sensitivity of self-attention layers (SALs) to parameter reduction. Moreover, standard federated averaging (FedAvg) often diminishes personalization by blending crucial client-specific parameters. To overcome these issues, we propose PFL-TPP (Personalized Federated Learning with Transformer Pruning and Personalization), a dual-strategy framework that reduces computational and communication burdens while maintaining high model accuracy and personalization. Our approach employs dynamic, learnable threshold pruning on feed-forward layers (FFLs) to eliminate redundant computations. For SALs, we introduce a novel server-side hypernetwork that generates personalized attention parameters from client-specific embeddings, significantly cutting communication overhead without sacrificing personalization. Extensive experiments demonstrate that PFL-TPP achieves up to 82.73% energy savings, an 86% reduction in training time, and improved model accuracy compared to standard baselines. These results underscore the effectiveness of our approach in enabling scalable, communication-efficient deployment of transformers in real-world PFL scenarios.
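The abstract does not give implementation details, but the two core ideas can be illustrated concretely. The following is a minimal NumPy sketch, not the authors' code: `soft_threshold_prune` shows one plausible form of magnitude pruning with a learnable threshold for FFL weights (a sigmoid gate keeps the threshold differentiable), and `AttentionHypernetwork` shows a toy server-side MLP mapping a client embedding to a flat vector of attention parameters, so that only small embeddings need travel over the uplink. All names, shapes, and the gating formula are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold_prune(w, t, steepness=50.0):
    """Soft magnitude pruning with a learnable threshold t (illustrative).

    Weights whose magnitude falls below t are smoothly gated toward zero
    via a sigmoid, so t itself can be updated by gradient descent.
    """
    gate = 1.0 / (1.0 + np.exp(-steepness * (np.abs(w) - t)))
    return w * gate

class AttentionHypernetwork:
    """Toy server-side MLP: client embedding -> flat attention parameters."""

    def __init__(self, embed_dim, n_attn_params, hidden=32):
        self.W1 = rng.normal(0.0, 0.1, (embed_dim, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_attn_params))

    def __call__(self, client_embedding):
        h = np.tanh(client_embedding @ self.W1)          # hidden activation
        return h @ self.W2                               # personalized params

# FFL weights: small-magnitude entries are gated toward zero
w_ffl = rng.normal(0.0, 1.0, (4, 4))
w_pruned = soft_threshold_prune(w_ffl, t=0.5)

# Server generates personalized attention params from a client embedding;
# only the 8-dim embedding would be communicated, not the 16 parameters.
hyper = AttentionHypernetwork(embed_dim=8, n_attn_params=16)
client_emb = rng.normal(0.0, 1.0, 8)
attn_flat = hyper(client_emb)  # shape (16,)
```

Because the sigmoid gate lies strictly in (0, 1), pruning only shrinks weights and never amplifies them, which is what makes the threshold trainable without hard, non-differentiable cutoffs.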

Original language: English
Pages (from-to): 1-16
Number of pages: 16
Journal: IEEE Transactions on Machine Learning in Communications and Networking
Volume: 4
DOIs
Publication status: Published - 2026

Keywords

  • Accuracy
  • Adaptation models
  • Computational modeling
  • Data models
  • Data privacy
  • Federated learning
  • Hypernetwork
  • Learnable thresholds
  • Personalized federated learning (PFL)
  • Personalized sparse models
  • Pruning
  • Resource optimization
  • Standards
  • Training
  • Transformers
  • Wireless networks

