TY - JOUR
T1 - Clustered Scheduling and Communication Pipelining for Efficient Resource Management of Wireless Federated Learning
AU - Kececi, Cihat
AU - Shaqfeh, Mohammad
AU - Al-Qahtani, Fawaz
AU - Ismail, Muhammad
AU - Serpedin, Erchin
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2023/8/1
Y1 - 2023/8/1
N2 - This article proposes using communication pipelining to enhance the convergence speed of federated learning in mobile edge computing applications. Because the number of wireless subchannels is limited, only a subset of the clients is scheduled in each iteration of federated learning algorithms. Moreover, the scheduled clients must wait for the slowest client to finish its computation. We propose to first cluster the clients based on the time they need per iteration to compute the local gradients of the federated learning model. Then, we schedule a mixture of clients from all clusters to send their local updates in a pipelined manner. In this way, instead of just waiting for the slower clients to finish their computations, more clients can participate in each iteration. While the duration of a single iteration does not change, the proposed method can significantly reduce the number of iterations required to achieve a target accuracy. We provide a generic formulation for optimal client clustering under different settings, and we analytically derive an efficient algorithm for obtaining the optimal solution. We also provide numerical results to demonstrate the gains of the proposed method for different data sets and deep learning architectures.
AB - This article proposes using communication pipelining to enhance the convergence speed of federated learning in mobile edge computing applications. Because the number of wireless subchannels is limited, only a subset of the clients is scheduled in each iteration of federated learning algorithms. Moreover, the scheduled clients must wait for the slowest client to finish its computation. We propose to first cluster the clients based on the time they need per iteration to compute the local gradients of the federated learning model. Then, we schedule a mixture of clients from all clusters to send their local updates in a pipelined manner. In this way, instead of just waiting for the slower clients to finish their computations, more clients can participate in each iteration. While the duration of a single iteration does not change, the proposed method can significantly reduce the number of iterations required to achieve a target accuracy. We provide a generic formulation for optimal client clustering under different settings, and we analytically derive an efficient algorithm for obtaining the optimal solution. We also provide numerical results to demonstrate the gains of the proposed method for different data sets and deep learning architectures.
KW - Clustered scheduling
KW - communication pipelining
KW - federated learning
KW - mobile edge computing
UR - https://www.scopus.com/pages/publications/85151542388
U2 - 10.1109/JIOT.2023.3262620
DO - 10.1109/JIOT.2023.3262620
M3 - Article
AN - SCOPUS:85151542388
SN - 2327-4662
VL - 10
SP - 13303
EP - 13316
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 15
ER -