The Adversarial Machine Learning Conundrum: Can the Insecurity of ML Become the Achilles' Heel of Cognitive Networks?

Muhammad Usama, Junaid Qadir, Ala Al-Fuqaha, Mounir Hamdi

Research output: Contribution to journal › Article › peer-review

Abstract

The holy grail of networking is to create cognitive networks that organize, manage, and drive themselves. Such a vision now seems attainable thanks in large part to progress in the field of machine learning (ML), which has already disrupted a number of industries and revolutionized practically all fields of research. But are ML models foolproof and robust enough against security attacks to be put in charge of managing the network? Unfortunately, many modern ML models are easily misled by simple, easily crafted adversarial perturbations, which does not bode well for the future of ML-based cognitive networks unless ML vulnerabilities in the cognitive networking environment are identified, addressed, and fixed. The purpose of this article is to highlight the problem of insecure ML and to sensitize readers to the danger of adversarial ML by showing how an easily crafted adversarial example can compromise the operation of a cognitive self-driving network. In this article, we demonstrate adversarial attacks on two simple yet representative cognitive networking applications, namely intrusion detection and network traffic classification. We also provide guidelines for designing ML models for cognitive networks that are robust to adversarial attacks on the ML pipeline of cognitive networks.
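The kind of "easily crafted adversarial perturbation" the abstract refers to can be illustrated with a minimal sketch of the fast gradient sign method (FGSM) applied to a toy logistic-regression traffic classifier. This is not the article's own attack or model: the features, weights, and perturbation budget below are invented for illustration; the article itself targets real intrusion-detection and traffic-classification models.

```python
import numpy as np

def sigmoid(z):
    # Standard logistic function.
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM against a logistic-regression classifier: nudge every
    feature one eps-sized step in the direction that increases the
    cross-entropy loss (sign of the input gradient)."""
    p = sigmoid(w @ x + b)   # model's predicted probability
    grad_x = (p - y) * w     # d(cross-entropy loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy weights and a benign flow (hypothetical numeric flow features).
w = np.array([1.5, -2.0, 0.5])
b = 0.0
x = np.array([1.0, -1.0, 0.5])   # input the model classifies correctly
y = 1.0                          # true label

p_clean = sigmoid(w @ x + b)             # confident, correct prediction
x_adv = fgsm_perturb(x, y, w, b, eps=1.0)
p_adv = sigmoid(w @ x_adv + b)           # decision flips after perturbation
print(p_clean > 0.5, p_adv > 0.5)        # True False
```

Even this linear toy model is flipped by a bounded, gradient-guided perturbation; the deep models used in cognitive networking pipelines are, as the article argues, similarly susceptible.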

Original language: English
Article number: 8884228
Pages (from-to): 196-203
Number of pages: 8
Journal: IEEE Network
Volume: 34
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2020
