TY - JOUR
T1 - Communication-efficient hierarchical federated learning for IoT heterogeneous systems with imbalanced data
AU - Abdellatif, Alaa Awad
AU - Mhaisen, Naram
AU - Mohamed, Amr
AU - Erbad, Aiman
AU - Guizani, Mohsen
AU - Dawy, Zaher
AU - Nasreddine, Wassim
N1 - Publisher Copyright:
© 2021 The Author(s)
PY - 2022/3
Y1 - 2022/3
N2 - Federated Learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model without sharing their local data. It is a promising solution for telemonitoring systems that demand intensive data collection from different locations for the detection, classification, and prediction of future events, while maintaining strict privacy constraints. Due to privacy concerns and critical communication bottlenecks, it can become impractical to send the updated FL models to a centralized server. Thus, this paper studies the potential of hierarchical FL in Internet of Things (IoT) heterogeneous systems. In particular, we propose an optimized solution for user assignment and resource allocation over a hierarchical FL architecture for IoT heterogeneous systems. This work focuses on a generic class of machine learning models that are trained using gradient-descent-based schemes, while considering the practical constraint of non-uniformly distributed data across different users. We evaluate the proposed system using two real-world datasets and show that it outperforms state-of-the-art FL solutions. Specifically, our numerical results highlight the effectiveness of our approach and its ability to provide a 4-6% increase in classification accuracy relative to hierarchical FL schemes that use distance-based user assignment. Furthermore, the proposed approach can significantly accelerate FL training and reduce communication overhead, providing a 75-85% reduction in the communication rounds between edge nodes and the centralized server for the same model accuracy.
AB - Federated Learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model without sharing their local data. It is a promising solution for telemonitoring systems that demand intensive data collection from different locations for the detection, classification, and prediction of future events, while maintaining strict privacy constraints. Due to privacy concerns and critical communication bottlenecks, it can become impractical to send the updated FL models to a centralized server. Thus, this paper studies the potential of hierarchical FL in Internet of Things (IoT) heterogeneous systems. In particular, we propose an optimized solution for user assignment and resource allocation over a hierarchical FL architecture for IoT heterogeneous systems. This work focuses on a generic class of machine learning models that are trained using gradient-descent-based schemes, while considering the practical constraint of non-uniformly distributed data across different users. We evaluate the proposed system using two real-world datasets and show that it outperforms state-of-the-art FL solutions. Specifically, our numerical results highlight the effectiveness of our approach and its ability to provide a 4-6% increase in classification accuracy relative to hierarchical FL schemes that use distance-based user assignment. Furthermore, the proposed approach can significantly accelerate FL training and reduce communication overhead, providing a 75-85% reduction in the communication rounds between edge nodes and the centralized server for the same model accuracy.
KW - Distributed deep learning
KW - Edge computing
KW - Intelligent health systems
KW - Internet of Things (IoT)
KW - Non-IID data
UR - https://www.scopus.com/pages/publications/85118559787
U2 - 10.1016/j.future.2021.10.016
DO - 10.1016/j.future.2021.10.016
M3 - Article
AN - SCOPUS:85118559787
SN - 0167-739X
VL - 128
SP - 406
EP - 419
JO - Future Generation Computer Systems
JF - Future Generation Computer Systems
ER -