TY - JOUR
T1 - The Adversarial Machine Learning Conundrum
T2 - Can the Insecurity of ML Become the Achilles' Heel of Cognitive Networks?
AU - Usama, Muhammad
AU - Qadir, Junaid
AU - Al-Fuqaha, Ala
AU - Hamdi, Mounir
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2020/1/1
Y1 - 2020/1/1
N2 - The holy grail of networking is to create cognitive networks that organize, manage, and drive themselves. Such a vision now seems attainable thanks in large part to progress in the field of machine learning (ML), which has already disrupted a number of industries and revolutionized practically all fields of research. But are ML models foolproof and robust enough against security attacks to be put in charge of managing a network? Unfortunately, many modern ML models are easily misled by simple, easily crafted adversarial perturbations, which does not bode well for the future of ML-based cognitive networks unless ML vulnerabilities in the cognitive networking environment are identified, addressed, and fixed. The purpose of this article is to highlight the problem of insecure ML and to sensitize readers to the danger of adversarial ML by showing how an easily crafted adversarial example can compromise the operations of a cognitive self-driving network. In this article, we demonstrate adversarial attacks on two simple yet representative cognitive networking applications (namely, intrusion detection and network traffic classification). We also provide guidelines for designing ML models for cognitive networks that are robust to adversarial attacks on the ML pipeline.
AB - The holy grail of networking is to create cognitive networks that organize, manage, and drive themselves. Such a vision now seems attainable thanks in large part to progress in the field of machine learning (ML), which has already disrupted a number of industries and revolutionized practically all fields of research. But are ML models foolproof and robust enough against security attacks to be put in charge of managing a network? Unfortunately, many modern ML models are easily misled by simple, easily crafted adversarial perturbations, which does not bode well for the future of ML-based cognitive networks unless ML vulnerabilities in the cognitive networking environment are identified, addressed, and fixed. The purpose of this article is to highlight the problem of insecure ML and to sensitize readers to the danger of adversarial ML by showing how an easily crafted adversarial example can compromise the operations of a cognitive self-driving network. In this article, we demonstrate adversarial attacks on two simple yet representative cognitive networking applications (namely, intrusion detection and network traffic classification). We also provide guidelines for designing ML models for cognitive networks that are robust to adversarial attacks on the ML pipeline.
UR - https://www.scopus.com/pages/publications/85074528466
U2 - 10.1109/MNET.001.1900197
DO - 10.1109/MNET.001.1900197
M3 - Article
AN - SCOPUS:85074528466
SN - 0890-8044
VL - 34
SP - 196
EP - 203
JO - IEEE Network
JF - IEEE Network
IS - 1
M1 - 8884228
ER -