DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data

Research output: Contribution to conference › Paper › peer-review

Abstract

Backdoor attacks are among the most effective, practical, and stealthy attacks in deep learning. We consider a practical scenario in which a developer obtains a deep model from a third party and uses it as part of a safety-critical system. The developer wants to inspect the model for potential backdoors prior to system deployment. We find that most existing detection techniques make assumptions that do not hold in this scenario. In this paper, we present a novel framework for detecting backdoors under realistic restrictions. We generate candidate triggers by deductively searching over the space of possible triggers, constructing and optimizing a smoothed version of the Attack Success Rate as our search objective. Starting from a broad class of template attacks and using only the forward pass of the deep model, we reverse engineer the backdoor attack. Extensive evaluation on a wide range of attacks, models, and datasets shows that our technique performs almost perfectly across these settings.
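The abstract's smoothed search objective can be illustrated with a minimal sketch. Here the plain Attack Success Rate (fraction of triggered inputs classified as the attacker's target label) is replaced by the mean softmax probability of the target label, which varies smoothly with the candidate trigger and needs only forward passes. The patch-style trigger, the `model_forward` interface, and all names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def apply_trigger(images, patch, pos):
    # Stamp a square patch (a hypothetical trigger form) onto a batch of images.
    stamped = images.copy()
    r, c = pos
    h, w = patch.shape
    stamped[:, r:r + h, c:c + w] = patch
    return stamped

def smoothed_asr(model_forward, images, patch, pos, target):
    # Smoothed Attack Success Rate: mean softmax probability of the target
    # label on triggered inputs, instead of the 0/1 success indicator.
    # Only the model's forward pass (logits) is required.
    logits = model_forward(apply_trigger(images, patch, pos))
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    return probs[:, target].mean()
```

A trigger search would then score many candidate `(patch, pos)` pairs with `smoothed_asr` and flag the model as backdoored if any candidate drives the objective close to 1.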
Original language: English
Number of pages: 20
Publication status: Published - Oct 2025
Event: 34th USENIX Security Symposium - Seattle, United States
Duration: 13 Aug 2025 - 15 Aug 2025

Conference

Conference: 34th USENIX Security Symposium
Country/Territory: United States
City: Seattle
Period: 13/08/25 - 15/08/25
