
ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms

Python 3.6 | PyTorch 1.10.1 | CUDA 11.0

This repository is the official implementation of the USENIX Security 2023 paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms." We find that existing detection methods either cannot be applied to self-supervised learning and transfer learning or suffer limited performance in these settings; even in the widely studied end-to-end supervised learning setting, there is still large room to improve detection robustness to variations in poison ratio and attack design.

To address this problem, we actively introduce different model behaviors between backdoor and clean samples to promote their separation.

Features

In the past, the detection of backdoor data was primarily studied within the framework of end-to-end supervised learning (SL). In recent years, however, self-supervised learning (SSL) and transfer learning (TL) have become increasingly popular due to their reduced requirement for labeled data, and successful backdoor attacks have been demonstrated in these settings as well. We propose a new detection method called Active Separation via Offset (ASSET), which actively induces different model behaviors between backdoor and clean samples to promote their separation. ASSET enables stable defense under different learning paradigms.
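To make the idea concrete, below is a minimal, illustrative PyTorch sketch of the active-separation principle: a detector model alternates between fitting a small trusted clean base set and pushing its loss up on the suspicious pool, and the resulting per-sample losses serve as separation scores. The function name, optimizer, round/offset schedule, and scoring rule here are assumptions for illustration and do not reproduce the repository's exact training procedure (see ASSET_demo.ipynb and the paper for the real algorithm).

```python
# Illustrative sketch of "active separation"; NOT the repository's exact procedure.
import torch
import torch.nn.functional as F

def active_separation_scores(model, clean_base_loader, suspicious_loader,
                             rounds=5, lr=1e-3,
                             device="cuda" if torch.cuda.is_available() else "cpu"):
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(rounds):
        model.train()
        # (1) Descent step: fit the small trusted clean base set.
        for x, y in clean_base_loader:
            x, y = x.to(device), y.to(device)
            loss = F.cross_entropy(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
        # (2) Offset step: raise the loss on the suspicious pool (gradient ascent
        # via a negated loss), so only samples the model fits very easily -- often
        # backdoor samples relying on a shortcut trigger -- keep a low loss.
        for x, y in suspicious_loader:
            x, y = x.to(device), y.to(device)
            loss = -F.cross_entropy(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
    # Per-sample loss on the suspicious pool serves as a separation score; a gap in
    # this score distribution is what the defense exploits to flag poisoned data.
    model.eval()
    scores = []
    with torch.no_grad():
        for x, y in suspicious_loader:
            x, y = x.to(device), y.to(device)
            scores.append(F.cross_entropy(model(x), y, reduction="none").cpu())
    return torch.cat(scores)
```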


Requirements

  • Python >= 3.6
  • PyTorch >= 1.10.1
  • Torchvision >= 0.11.2
  • Imageio >= 2.9.0

Usage & HOW-TO

Use the ASSET_demo.ipynb notebook for a quick start with the ASSET defense (demonstrated on the CIFAR-10 dataset). The default setting runs the BadNets attack on CIFAR-10 with a ResNet-18 model.
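For readers who prefer a script over the notebook, here is a hedged usage sketch for the default CIFAR-10 / BadNets / ResNet-18 setting. It relies only on standard torchvision APIs plus the illustrative active_separation_scores sketch above; the clean-base indices and the flagging cutoff are placeholders, not values prescribed by the repository.

```python
# Hypothetical end-to-end usage sketch; ASSET_demo.ipynb remains the authoritative
# entry point. The clean base set and the 10% cutoff below are illustrative assumptions.
import torch
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader, Subset

transform = T.ToTensor()
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)

# Small trusted clean base set vs. the full (possibly poisoned) training pool.
clean_base = Subset(train_set, range(1000))
clean_loader = DataLoader(clean_base, batch_size=128, shuffle=True)
susp_loader = DataLoader(train_set, batch_size=128, shuffle=False)

model = torchvision.models.resnet18(num_classes=10)
scores = active_separation_scores(model, clean_loader, susp_loader)

# Flag samples on the far side of the score gap; here the lowest 10% of losses,
# a placeholder cutoff that would normally be read off the score distribution.
flagged = (scores < torch.quantile(scores, 0.10)).nonzero().flatten()
print(f"Flagged {flagged.numel()} of {len(train_set)} training samples")
```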

Can you make it easier?
