DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Backdoor attacks are among the most effective, practical, and stealthy attacks in deep learning. In this paper, we consider a practical scenario in which a developer obtains a deep model from a third party and uses it as part of a safety-critical system. The developer wants to inspect the model for potential backdoors prior to system deployment, yet we find that most existing detection techniques make assumptions that do not hold in this scenario. We present a novel framework for detecting backdoors under these realistic restrictions. We generate candidate triggers by deductively searching over the space of possible triggers, constructing and optimizing a smoothed version of the Attack Success Rate as our search objective. Starting from a broad class of template attacks and using only the forward pass of the deep model, we reverse engineer the backdoor attack. We conduct an extensive evaluation on a wide range of attacks, models, and datasets, with our technique performing almost perfectly across these settings.
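The abstract describes two key ingredients: a smoothed surrogate for the Attack Success Rate (ASR), and a search over candidate triggers that uses only the model's forward pass. The following is a minimal illustrative sketch of that idea, not the paper's actual algorithm: the function names (`apply_trigger`, `smoothed_asr`, `search_trigger`), the grayscale patch-trigger template, and the plain random search are all hypothetical stand-ins for the paper's more structured deductive search.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def apply_trigger(images, patch, x, y):
    """Stamp a k-by-k patch onto a batch of grayscale images at (x, y)."""
    stamped = images.copy()
    k = patch.shape[0]
    stamped[:, x:x + k, y:y + k] = patch
    return stamped

def smoothed_asr(forward, images, patch, x, y, target):
    """Mean target-class probability over the batch: a smooth surrogate
    for the hard ASR (fraction of inputs classified as the target)."""
    probs = softmax(forward(apply_trigger(images, patch, x, y)))
    return probs[:, target].mean()

def search_trigger(forward, images, target, k=3, iters=200, seed=0):
    """Black-box search over patch patterns and positions, using only
    the model's forward pass (random search as a simple placeholder)."""
    rng = np.random.default_rng(seed)
    h, w = images.shape[1:3]
    best = (-1.0, None, None, None)
    for _ in range(iters):
        patch = rng.random((k, k))
        x = int(rng.integers(0, h - k + 1))
        y = int(rng.integers(0, w - k + 1))
        score = smoothed_asr(forward, images, patch, x, y, target)
        if score > best[0]:
            best = (score, patch, x, y)
    return best  # (best smoothed ASR, patch, x, y)
```

Smoothing matters because the hard ASR is a step function of the trigger parameters (each input either flips to the target class or does not), whereas the mean target-class probability gives a graded signal that a search procedure can climb.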

Original language: English
Title of host publication: Proceedings of the 34th USENIX Security Symposium
Publisher: USENIX Association
Pages: 6419-6438
Number of pages: 20
ISBN (Electronic): 9781939133526
Publication status: Published - 2025
Event: 34th USENIX Security Symposium, USENIX Security 2025 - Seattle, United States
Duration: 13 Aug 2025 - 15 Aug 2025

Publication series

Name: Proceedings of the 34th USENIX Security Symposium

Conference

Conference: 34th USENIX Security Symposium, USENIX Security 2025
Country/Territory: United States
City: Seattle
Period: 13/08/25 - 15/08/25
