
Set-level Guidance Attack

The official repository for Set-level Guidance Attack (SGA).
ICCV 2023 Oral Paper: Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models (https://arxiv.org/abs/2307.14061)

Please feel free to contact wangzq_2021@outlook.com if you have any questions.

Brief Introduction

Vision-language pre-training (VLP) models have been shown to be vulnerable to adversarial attacks. However, existing work mainly studies the adversarial robustness of VLP models in the white-box setting. In this work, we investigate the robustness of VLP models in the black-box setting from the perspective of adversarial transferability. We propose Set-level Guidance Attack (SGA), which generates highly transferable adversarial examples against VLP models.
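At a high level, SGA enlarges each image-text pair into an augmented set (e.g., multi-scale image views guided by multiple paired captions) and crafts perturbations against the whole set rather than a single pair. As a rough illustration of the multi-scale part only, here is a minimal sketch assuming PyTorch (`build_image_set` is a hypothetical helper, not the repository's code):

```python
import torch
import torch.nn.functional as F

def build_image_set(image: torch.Tensor, scales=(0.5, 0.75, 1.25, 1.5)) -> torch.Tensor:
    """Expand one image (1, C, H, W) into a set of rescaled views.

    Hypothetical helper illustrating the set-level idea; the
    repository's actual implementation may differ in detail.
    """
    _, _, h, w = image.shape
    views = [image]
    for s in scales:
        # Resize by the scale factor, then back to the original
        # resolution so every view has a consistent input size.
        scaled = F.interpolate(image, scale_factor=s, mode='bilinear',
                               align_corners=False)
        views.append(F.interpolate(scaled, size=(h, w), mode='bilinear',
                                   align_corners=False))
    return torch.cat(views, dim=0)  # (1 + len(scales), C, H, W)

# Usage: five views (original plus four rescaled) of one 224x224 image.
x = torch.rand(1, 3, 224, 224)
print(build_image_set(x).shape)  # torch.Size([5, 3, 224, 224])
```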

Quick Start

1. Install dependencies

See requirements.txt.
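For example:

pip install -r requirements.txt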

2. Prepare datasets and models

Download the Flickr30k and MSCOCO datasets (the annotations are provided in ./data_annotation/). Set the root path of the dataset as image_root in ./configs/Retrieval_flickr.yaml.
The checkpoints of the fine-tuned VLP models are available from ALBEF, TCL, and CLIP.
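For example, the relevant entry in ./configs/Retrieval_flickr.yaml would look like the following (the path is a placeholder for your local copy):

image_root: '/path/to/flickr30k/images/'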

3. Attack evaluation

From ALBEF to TCL on the Flickr30k dataset:

python eval_albef2tcl_flickr.py --config ./configs/Retrieval_flickr.yaml \
--source_model ALBEF  --source_ckpt ./checkpoint/albef_retrieval_flickr.pth \
--target_model TCL --target_ckpt ./checkpoint/tcl_retrieval_flickr.pth \
--original_rank_index ./std_eval_idx/flickr30k/ --scales 0.5,0.75,1.25,1.5

From ALBEF to CLIPViT on the Flickr30k dataset:

python eval_albef2clip-vit_flickr.py --config ./configs/Retrieval_flickr.yaml \
--source_model ALBEF  --source_ckpt ./checkpoint/albef_retrieval_flickr.pth \
--target_model ViT-B/16 --original_rank_index ./std_eval_idx/flickr30k/ \
--scales 0.5,0.75,1.25,1.5

From CLIPViT to ALBEF on the Flickr30k dataset:

python eval_clip-vit2albef_flickr.py --config ./configs/Retrieval_flickr.yaml \
--source_model ViT-B/16  --target_model ALBEF \
--target_ckpt ./checkpoint/albef_retrieval_flickr.pth \
--original_rank_index ./std_eval_idx/flickr30k/ --scales 0.5,0.75,1.25,1.5

From CLIPViT to CLIPCNN on the Flickr30k dataset:

python eval_clip-vit2clip-cnn_flickr.py --config ./configs/Retrieval_flickr.yaml \
--source_model ViT-B/16  --target_model RN101 \
--original_rank_index ./std_eval_idx/flickr30k/ --scales 0.5,0.75,1.25,1.5
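All four scripts share the same interface: --source_model/--source_ckpt specify the white-box model used to craft adversarial examples, --target_model/--target_ckpt the black-box model they are transferred to, --original_rank_index the precomputed clean-retrieval rank indices under ./std_eval_idx/, and --scales the comma-separated resize factors used to build the image set. As a hedged sketch of how such a comma-separated flag can be parsed (the repository's actual argument handling may differ):

```python
import argparse

# Illustrative only: turn "--scales 0.5,0.75,1.25,1.5" into a list of
# floats; the eval scripts' real parsing may differ.
parser = argparse.ArgumentParser()
parser.add_argument('--scales',
                    type=lambda s: [float(v) for v in s.split(',')],
                    default=[0.5, 0.75, 1.25, 1.5])

args = parser.parse_args(['--scales', '0.5,0.75,1.25,1.5'])
print(args.scales)  # [0.5, 0.75, 1.25, 1.5]
```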

Transferability Evaluation

Existing adversarial attacks on VLP models cannot generate highly transferable adversarial examples.
(Note: Sep-Attack denotes the simple combination of two unimodal adversarial attacks, PGD and BERT-Attack; a minimal PGD sketch follows the table.)

| Attack | ALBEF* TR R@1 | ALBEF* IR R@1 | TCL TR R@1 | TCL IR R@1 | CLIPViT TR R@1 | CLIPViT IR R@1 | CLIPCNN TR R@1 | CLIPCNN IR R@1 |
|---|---|---|---|---|---|---|---|---|
| Sep-Attack | 65.69 | 73.95 | 17.60 | 32.95 | 31.17 | 45.23 | 32.82 | 45.49 |
| Sep-Attack + MI | 58.81 | 65.25 | 16.02 | 28.19 | 23.07 | 36.98 | 26.56 | 39.31 |
| Sep-Attack + DIM | 56.41 | 64.24 | 16.75 | 29.55 | 24.17 | 37.60 | 25.54 | 38.77 |
| Sep-Attack + PNA_PO | 40.56 | 53.95 | 18.44 | 30.98 | 22.33 | 37.02 | 26.95 | 38.63 |
| Co-Attack | 77.16 | 83.86 | 15.21 | 29.49 | 23.60 | 36.48 | 25.12 | 38.89 |
| Co-Attack + MI | 64.86 | 75.26 | 25.40 | 38.69 | 24.91 | 37.11 | 26.31 | 38.97 |
| Co-Attack + DIM | 47.03 | 62.28 | 22.23 | 35.45 | 25.64 | 38.50 | 26.95 | 40.58 |
| SGA | 97.24 | 97.28 | 45.42 | 55.25 | 33.38 | 44.16 | 34.93 | 46.57 |

TR R@1 and IR R@1 are Recall@1 for text retrieval and image retrieval; * marks white-box results on the source model (here ALBEF), while the remaining columns are black-box transfer.
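For reference, a minimal PGD sketch on the image modality, assuming PyTorch; model, loss_fn, and the hyperparameters below are placeholders rather than any baseline's exact settings:

```python
import torch

def pgd_attack(model, images, loss_fn, eps=8/255, alpha=2/255, steps=10):
    # L-infinity PGD: repeatedly ascend the loss gradient, then project
    # the perturbation back into the eps-ball around the clean images.
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv))
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = images + torch.clamp(adv - images, -eps, eps)
        adv = adv.clamp(0, 1)
    return adv.detach()
```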

The performance of SGA on four VLP models (ALBEF, TCL, CLIPViT, and CLIPCNN) on the Flickr30k dataset. * again marks white-box results on the source model.

| Source | Attack | ALBEF TR R@1 | ALBEF IR R@1 | TCL TR R@1 | TCL IR R@1 | CLIPViT TR R@1 | CLIPViT IR R@1 | CLIPCNN TR R@1 | CLIPCNN IR R@1 |
|---|---|---|---|---|---|---|---|---|---|
| ALBEF | PGD | 52.45* | 58.65* | 3.06 | 6.79 | 8.96 | 13.21 | 10.34 | 14.65 |
| ALBEF | BERT-Attack | 11.57* | 27.46* | 12.64 | 28.07 | 29.33 | 43.17 | 32.69 | 46.11 |
| ALBEF | Sep-Attack | 65.69* | 73.95* | 17.60 | 32.95 | 31.17 | 45.23 | 32.82 | 45.49 |
| ALBEF | Co-Attack | 77.16* | 83.86* | 15.21 | 29.49 | 23.60 | 36.48 | 25.12 | 38.89 |
| ALBEF | SGA | 97.24±0.22* | 97.28±0.15* | 45.42±0.60 | 55.25±0.06 | 33.38±0.35 | 44.16±0.25 | 34.93±0.99 | 46.57±0.13 |
| TCL | PGD | 6.15 | 10.78 | 77.87* | 79.48* | 7.48 | 13.72 | 10.34 | 15.33 |
| TCL | BERT-Attack | 11.89 | 26.82 | 14.54* | 29.17* | 29.69 | 44.49 | 33.46 | 46.07 |
| TCL | Sep-Attack | 20.13 | 36.48 | 84.72* | 86.07* | 31.29 | 44.65 | 33.33 | 45.80 |
| TCL | Co-Attack | 23.15 | 40.04 | 77.94* | 85.59* | 27.85 | 41.19 | 30.74 | 44.11 |
| TCL | SGA | 48.91±0.74 | 60.34±0.10 | 98.37±0.08* | 98.81±0.07* | 33.87±0.18 | 44.88±0.54 | 37.74±0.27 | 48.30±0.34 |
| CLIPViT | PGD | 2.50 | 4.93 | 4.85 | 8.17 | 70.92* | 78.61* | 5.36 | 8.44 |
| CLIPViT | BERT-Attack | 9.59 | 22.64 | 11.80 | 25.07 | 28.34* | 39.08* | 30.40 | 37.43 |
| CLIPViT | Sep-Attack | 9.59 | 23.25 | 11.38 | 25.60 | 79.75* | 86.79* | 30.78 | 39.76 |
| CLIPViT | Co-Attack | 10.57 | 24.33 | 11.94 | 26.69 | 93.25* | 95.86* | 32.52 | 41.82 |
| CLIPViT | SGA | 13.40±0.07 | 27.22±0.06 | 16.23±0.45 | 30.76±0.07 | 99.08±0.08* | 98.94±0.00* | 38.76±0.27 | 47.79±0.58 |
| CLIPCNN | PGD | 2.09 | 4.82 | 4.00 | 7.81 | 1.10 | 6.60 | 86.46* | 92.25* |
| CLIPCNN | BERT-Attack | 8.86 | 23.27 | 12.33 | 25.48 | 27.12 | 37.44 | 30.40* | 40.10* |
| CLIPCNN | Sep-Attack | 8.55 | 23.41 | 12.64 | 26.12 | 28.34 | 39.43 | 91.44* | 95.44* |
| CLIPCNN | Co-Attack | 8.79 | 23.74 | 13.10 | 26.07 | 28.79 | 40.03 | 94.76* | 96.89* |
| CLIPCNN | SGA | 11.42±0.07 | 24.80±0.28 | 14.91±0.08 | 28.82±0.11 | 31.24±0.42 | 42.12±0.11 | 99.24±0.18* | 99.49±0.05* |

Visualization

Citation

If this work helps your research, please cite:

@misc{lu2023setlevel,
    title={Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models},
    author={Dong Lu and Zhiqiang Wang and Teng Wang and Weili Guan and Hongchang Gao and Feng Zheng},
    year={2023},
    eprint={2307.14061},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
