TY - JOUR
T1 - Lyapunov-Guided Long-Term Fairness-Aware Federated Learning for Collaborative TinyML on Edge Devices
AU - Lu, Jianfeng
AU - Sheng, Yuhang
AU - Cao, Shuqin
AU - Elnaffar, Said
AU - Saad, Malik Muhammad
AU - Seid, Abegaz Mohammed
AU - Erbad, Aiman
N1 - Publisher Copyright:
© 1975-2011 IEEE.
PY - 2024/5/7
Y1 - 2024/5/7
N2 - Although federated learning (FL) has become a privacy-preserving machine learning paradigm that enables a collaborative form of tiny machine learning (TinyML) on edge devices, unfairness may arise when the performance of the global model varies due to heterogeneous devices and data. Existing works mainly focus on improving fairness in a single time slot, often ignoring the temporal coupling of FL in TinyML. To tackle this issue, we introduce a novel long-term fairness-aware model aggregation mechanism, named FedLV, which aims to reduce the accuracy distribution variance by considering the FL process as a whole. Specifically, we introduce the long-term fairness criterion as well as the long-term fairness problem into the design of FedLV for FL. To promote the global model's performance, we quantify the long-term guarantee of clients' contributions, and transform the long-term fairness problem into a queue stability problem via Lyapunov optimization. Since the global model's accuracy distribution is unmeasurable before model aggregation, we further propose a prior estimation solver to derive an approximately optimal solution and provide theoretical proof of the solver's convergence. Extensive experiments conducted on four real-world datasets demonstrate that the accuracy distribution variance of FedLV is at least 12% lower than that of both FedAvg and q-FFL.
AB - Although federated learning (FL) has become a privacy-preserving machine learning paradigm that enables a collaborative form of tiny machine learning (TinyML) on edge devices, unfairness may arise when the performance of the global model varies due to heterogeneous devices and data. Existing works mainly focus on improving fairness in a single time slot, often ignoring the temporal coupling of FL in TinyML. To tackle this issue, we introduce a novel long-term fairness-aware model aggregation mechanism, named FedLV, which aims to reduce the accuracy distribution variance by considering the FL process as a whole. Specifically, we introduce the long-term fairness criterion as well as the long-term fairness problem into the design of FedLV for FL. To promote the global model's performance, we quantify the long-term guarantee of clients' contributions, and transform the long-term fairness problem into a queue stability problem via Lyapunov optimization. Since the global model's accuracy distribution is unmeasurable before model aggregation, we further propose a prior estimation solver to derive an approximately optimal solution and provide theoretical proof of the solver's convergence. Extensive experiments conducted on four real-world datasets demonstrate that the accuracy distribution variance of FedLV is at least 12% lower than that of both FedAvg and q-FFL.
KW - Federated learning
KW - Lyapunov optimization
KW - long-term fairness
KW - tiny machine learning
UR - https://www.scopus.com/pages/publications/85192987800
U2 - 10.1109/TCE.2024.3397863
DO - 10.1109/TCE.2024.3397863
M3 - Article
AN - SCOPUS:85192987800
SN - 0098-3063
VL - 70
SP - 7334
EP - 7345
JO - IEEE Transactions on Consumer Electronics
JF - IEEE Transactions on Consumer Electronics
IS - 4
ER -