TY - GEN
T1 - A Deep Reinforcement Learning-Driven Optimization for STAR-RIS-empowered Cooperative RSMA
AU - Maghrebi, Youssef
AU - Elhattab, Mohamed
AU - Assi, Chadi
AU - Ghrayeb, Ali
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024/11/20
Y1 - 2024/11/20
N2 - In this study, we explore the impact of employing an active Simultaneously Transmitting and Reflecting Reconfigurable Intelligent Surface (STAR-RIS) to aid a cooperative rate-splitting downlink communication system. The system comprises a base station (BS) equipped with multiple antennas, utilizing the active STAR-RIS to cater to two distinct user categories: weak users experiencing poor channel conditions with the BS, and strong users that have better channel conditions, serving as full-duplex relays to transmit the common stream to weak users. We aim to jointly optimize the BS beamformers, the active STAR-RIS reflection and transmission matrices, the common stream split, and the transmit relaying power of strong users. This optimization seeks to maximize the communication sum rate while adhering to minimum rate constraints, active STAR-RIS hardware constraints, and specified power budgets at the active STAR-RIS, the BS, and each strong user. Typically, this problem is solved by conventional optimization methods involving convex approximations, which exhibit high time complexity that can hinder the real-time requirements of modern wireless communication systems. To tackle this issue, we propose leveraging a deep reinforcement learning (DRL) framework to address this nonconvex optimization problem, aiming for improved time efficiency. Specifically, we advocate for the implementation of an actor-critic model, which offers a nuanced approach by concurrently evaluating actions and assigning values to state-action pairs. Finally, through simulations, we illustrate the performance of our proposed approach.
AB - In this study, we explore the impact of employing an active Simultaneously Transmitting and Reflecting Reconfigurable Intelligent Surface (STAR-RIS) to aid a cooperative rate-splitting downlink communication system. The system comprises a base station (BS) equipped with multiple antennas, utilizing the active STAR-RIS to cater to two distinct user categories: weak users experiencing poor channel conditions with the BS, and strong users that have better channel conditions, serving as full-duplex relays to transmit the common stream to weak users. We aim to jointly optimize the BS beamformers, the active STAR-RIS reflection and transmission matrices, the common stream split, and the transmit relaying power of strong users. This optimization seeks to maximize the communication sum rate while adhering to minimum rate constraints, active STAR-RIS hardware constraints, and specified power budgets at the active STAR-RIS, the BS, and each strong user. Typically, this problem is solved by conventional optimization methods involving convex approximations, which exhibit high time complexity that can hinder the real-time requirements of modern wireless communication systems. To tackle this issue, we propose leveraging a deep reinforcement learning (DRL) framework to address this nonconvex optimization problem, aiming for improved time efficiency. Specifically, we advocate for the implementation of an actor-critic model, which offers a nuanced approach by concurrently evaluating actions and assigning values to state-action pairs. Finally, through simulations, we illustrate the performance of our proposed approach.
UR - https://www.scopus.com/pages/publications/86000202562
U2 - 10.1109/MECOM61498.2024.10881479
DO - 10.1109/MECOM61498.2024.10881479
M3 - Conference contribution
AN - SCOPUS:86000202562
T3 - 2024 IEEE Middle East Conference on Communications and Networking, MECOM 2024
SP - 416
EP - 421
BT - 2024 IEEE Middle East Conference on Communications and Networking, MECOM 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE Middle East Conference on Communications and Networking, MECOM 2024
Y2 - 17 November 2024 through 20 November 2024
ER -