Toward AI-Native 6G: Unveiling Online Optimization and Deep Reinforcement Learning for Autonomous Network Slicing

Amr Abo-Eleneen*, Menna Helmy, Alaa Awad Abdellatif, Mohamed Abdallah, Amr Mohamed, Aiman Erbad

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The shift to AI-native 6G networks demands autonomous slicing strategies that can adapt to diverse and evolving edge and IoT service needs. Two paradigms have emerged: Learn to Slice (L2S), where AI optimizes network slicing for general services, and Slice to Learn (S2L), where slices support AI model training, often offloaded from Internet of Things (IoT) devices. Existing S2L approaches typically optimize communication or computation in isolation. This paper presents the first unified framework that jointly optimizes communication resources, computation capacity, and AI hyperparameters to maximize the average accuracy of multiple concurrent AI services. We address the complexity of this joint problem by applying L2S-inspired techniques to enhance S2L, introducing two autonomous agents: EXP3 from adversarial online learning and DQN from deep reinforcement learning. Extensive experiments demonstrate and contrast the effectiveness of these agents in maximizing aggregated AI accuracy, supporting knowledge transfer, and sustaining robust performance under adversarial and long-term conditions, thereby advancing zero-touch network management for AI services in 6G networks that serve resource-constrained IoT devices.
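To make the agent choice concrete, the following is a minimal sketch of the EXP3 bandit update that such an autonomous slicing agent could run, where each "arm" stands for one candidate slice configuration and the reward is the (normalized) AI accuracy observed after applying it. The arm count, horizon, exploration rate, and the `reward_fn` callback are illustrative assumptions, not details from the paper.

```python
import math
import random

def exp3(n_arms, reward_fn, horizon, gamma=0.1):
    """EXP3 for adversarial bandits: keep one weight per arm,
    sample an arm from an exploration-smoothed distribution,
    and update only the pulled arm with an importance-weighted
    reward estimate. Rewards are assumed to lie in [0, 1]."""
    weights = [1.0] * n_arms
    history = []
    for t in range(horizon):
        total = sum(weights)
        # Mix the weight-proportional distribution with uniform exploration.
        probs = [(1 - gamma) * w / total + gamma / n_arms for w in weights]
        arm = random.choices(range(n_arms), weights=probs)[0]
        r = reward_fn(arm, t)  # e.g. accuracy achieved by this slice config
        # Unbiased estimator: divide the observed reward by the pull probability.
        weights[arm] *= math.exp(gamma * r / (n_arms * probs[arm]))
        history.append(arm)
    return history
```

Because EXP3 makes no stochastic assumptions about the reward sequence, it retains its regret guarantee even when service demands shift adversarially over time, which is the robustness property the abstract highlights.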

Original language: English
Journal: IEEE Internet of Things Magazine
Publication status: Accepted/In press, 2025

Keywords

  • AI
  • DQN
  • EXP3
  • optimization
  • slicing
  • zero-touch
