TY - JOUR
T1 - Adversarial Machine Learning for Image-Based Radio Frequency Fingerprinting: Attacks and Defenses
AU - Papangelo, Lorenzo
AU - Pistilli, Maurizio
AU - Sciancalepore, Savio
AU - Oligeri, Gabriele
AU - Piro, Giuseppe
AU - Boggia, Gennaro
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024/1/8
Y1 - 2024/1/8
AB - Image-based radio frequency fingerprinting (RFF) is a promising variant of traditional RFF systems. As a distinctive feature, such systems convert physical-layer signals into matrices resembling 2-D or 3-D images and feed the latter to state-of-the-art image classifiers. Compared to traditional ones, image-based RFF systems have recently shown enhanced flexibility for device identification, as they better mitigate the effects of channel conditions, device movement, and power cycles. However, previous works have yet to investigate their performance when subject to adversarial machine learning (AML) attacks using state-of-the-art techniques such as generative adversarial networks and the fast gradient sign method. Similarly, there are no studies on their capability to integrate adversarial learning strategies to enhance their robustness to such attacks. In this article, we fill this gap by conducting an experimental analysis of the effectiveness of AML attacks and adversarial training techniques for image-based RFF systems. Using a state-of-the-art image-based RFF system and actual measurements, we show that adversarial samples can effectively degrade classification performance. At the same time, training the image-based RFF system with adversarial samples increases the reliability and robustness of such methods, at the cost of lower classification accuracy.
KW - Artificial neural networks
KW - Perturbation methods
KW - Radio frequency
KW - Receivers
KW - Robustness
KW - Symbols
KW - Training
UR - https://www.scopus.com/pages/publications/85182369775
U2 - 10.1109/MCOM.001.2300464
DO - 10.1109/MCOM.001.2300464
M3 - Article
AN - SCOPUS:85182369775
SN - 0163-6804
VL - 62
SP - 108
EP - 113
JO - IEEE Communications Magazine
JF - IEEE Communications Magazine
IS - 11
ER -