Abstract
Existing literature confirms the ability of machine learning to identify fraudulent smart grid power consumers who report false consumption readings to reduce their electricity bills. Additionally, federated learning (FL) shows promise as a way to train the detection model without requiring data sharing, thereby safeguarding consumer privacy. However, malicious participants (i.e., clients) in FL training can launch adversarial attacks by training their local models on specially crafted low-consumption data to inject a Trojan into the global model. This Trojan can then be activated during the evaluation phase to evade the detection of false data. To the best of our knowledge, this topic has received little attention in the context of unsupervised learning. The absence of labels in unsupervised learning amplifies the effectiveness of Trojan attacks and makes robust defense mechanisms harder to design. In this article, we first investigate the vulnerability of one-class classifiers to Trojan attacks. Then, we propose two defense approaches, named layerwise close-to-median (LWCM) and machine unlearning, to counter this attack. In LWCM, we update the global model using the FL client whose last-layer model parameters are closest to the median of all clients' last-layer parameters, allowing us to identify and exclude malicious updates. The idea is that the last-layer parameters of honest clients should be similar, whereas those of malicious clients differ. Since the majority of clients are honest, the median values lie closer to the honest clients' parameters, facilitating the detection of malicious clients. In machine unlearning, we use gradient ascent-based techniques to adapt the model by selectively removing attacker-related data points. This is possible because honest clients can generate data resembling that of malicious clients; a dual-component loss function maintains the model's proficiency in recognizing benign power consumption patterns while eliminating malicious patterns. To show the seriousness of Trojan attacks and the effectiveness of our countermeasures, we carry out extensive experiments.
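The LWCM selection step described in the abstract can be sketched as follows. This is an illustrative toy version only: the function name, the use of L2 distance to the element-wise median, and the toy parameter values are my assumptions, not details taken from the paper.

```python
import numpy as np

def lwcm_select(last_layer_params):
    """Pick the client whose last-layer parameters lie closest (in L2
    distance) to the element-wise median across all clients.
    last_layer_params: list of 1-D arrays, one per client."""
    stacked = np.stack(last_layer_params)     # shape: (n_clients, n_params)
    median = np.median(stacked, axis=0)       # element-wise median
    dists = np.linalg.norm(stacked - median, axis=1)
    return int(np.argmin(dists))              # index of the selected client

# Toy example: clients 0-2 are honest (similar parameters),
# client 3 is malicious (outlier parameters).
clients = [np.array([0.9, 1.1]), np.array([1.0, 1.0]),
           np.array([1.1, 0.9]), np.array([5.0, -4.0])]
chosen = lwcm_select(clients)   # selects an honest client, never client 3
```

Because the honest majority dominates each coordinate of the median, the outlier's large deviation makes its distance to the median large, so it is never selected to update the global model.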
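The dual-component unlearning objective can likewise be sketched with a toy model: gradient descent on a benign-data loss preserves normal-pattern recognition, while gradient ascent on trigger-like data erases the Trojan behavior. The linear scorer, target value, loss form, and the weight `lam` below are stand-ins chosen for illustration; the paper's actual detector is a one-class model.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)                         # toy model parameters

benign = rng.normal(1.0, 0.1, size=(64, 3))    # stand-in normal consumption
trigger = np.full((16, 3), 0.05)               # crafted low-consumption pattern

def loss_grad(w, X):
    # Squared-error loss of a toy linear scorer and its gradient.
    err = X @ w - 1.0
    return (err @ err) / len(X), 2 * X.T @ err / len(X)

lam, lr = 0.5, 0.05
for _ in range(200):
    _, g_ben = loss_grad(w, benign)
    _, g_trg = loss_grad(w, trigger)
    # Dual-component update: descent on benign loss, ascent on trigger loss.
    w -= lr * (g_ben - lam * g_trg)

l_ben, _ = loss_grad(w, benign)   # stays small: benign skill preserved
l_trg, _ = loss_grad(w, trigger)  # stays large: trigger pattern unlearned
```

The single combined objective (benign loss minus `lam` times trigger loss) is what makes this "selective": minimizing it pushes the model toward fitting benign patterns while actively mis-fitting the attacker-related ones.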
| Original language | English |
|---|---|
| Article number | 10720069 |
| Pages (from-to) | 4006-4021 |
| Number of pages | 16 |
| Journal | IEEE Internet of Things Journal |
| Volume | 12 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 15 Feb 2025 |
Keywords
- Smart grid (SG) automatic metering infrastructure
- Computational modeling
- Data models
- Data privacy
- Detectors
- Electricity
- One-class classifiers
- Security
- Smart grids
- Supervised learning
- Training
- Trojan attacks
- Trojan horses
- Unsupervised learning
- federated learning (FL)