

Official PyTorch implementation of Optimizing Relevance Maps of Vision Transformers Improves Robustness [NeurIPS 2022]

This code allows finetuning the explainability maps of Vision Transformers to enhance robustness.

A HuggingFace Space and a Colab notebook are available to run examples of the finetuned vs. the original models:

Open In Colab | Hugging Face Spaces | Open In YouTube

Updates:

06/05/2022: Added a HuggingFace Spaces demo.

Method overview:

The method applies loss functions directly to the explainability maps to ensure that the model focuses mostly on the foreground of the image.

Using a short finetuning process with only 3 labeled examples from 500 classes, our method improves the robustness of ViT models across different model sizes and training techniques, even when data augmentation/regularization is applied.
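
The loss terms are applied to the relevance maps together with a standard classification loss; the weights lambda_seg, lambda_acc, lambda_background, and lambda_foreground below correspond to the flags used in the finetuning section. The following is only a rough illustrative sketch of that combination, not the repository's implementation; the relevance-map and foreground-mask inputs are hypothetical placeholders.

import torch.nn.functional as F

def combined_loss(logits, targets, relevance, fg_mask,
                  lambda_seg=0.8, lambda_acc=0.2,
                  lambda_background=2.0, lambda_foreground=0.3):
    # Illustrative only. relevance: per-patch relevance map in [0, 1], shape (B, N);
    # fg_mask: binary foreground mask derived from the segmentation data, same shape.
    fg_mask = fg_mask.float()
    acc_loss = F.cross_entropy(logits, targets)            # classification term
    bg_loss = (relevance * (1 - fg_mask)).mean()           # penalize relevance on the background
    fg_loss = -(relevance * fg_mask).mean()                # encourage relevance on the foreground
    seg_loss = lambda_background * bg_loss + lambda_foreground * fg_loss
    return lambda_seg * seg_loss + lambda_acc * acc_loss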

Model zoo

Below are links to download finetuned models for the base models of ViT AugReg (this is also the model that appears on timm), vanilla ViT, and DeiT. These are also the weights used in our Colab notebook; a minimal loading sketch follows the table below.

Path        Description
AugReg-B    Finetuned ViT AugReg base model.
ViT-B       Finetuned vanilla ViT base model.
DeiT-B      Finetuned DeiT base model.
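
Each checkpoint is meant to replace the weights of the corresponding base model. A minimal loading sketch, assuming the AugReg-B checkpoint is compatible with timm's vit_base_patch16_224 and using a hypothetical file name; the exact checkpoint format and key layout may differ (e.g. weights nested under a 'state_dict' key):

import timm
import torch

# Build the ViT AugReg base architecture (timm==0.4.12, per Requirements).
model = timm.create_model('vit_base_patch16_224', pretrained=False)

# 'AugReg-B_finetuned.pth' is a placeholder for the downloaded checkpoint file.
checkpoint = torch.load('AugReg-B_finetuned.pth', map_location='cpu')
state_dict = checkpoint.get('state_dict', checkpoint)  # key layout is an assumption
model.load_state_dict(state_dict, strict=False)
model.eval()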

Requirements

  • pytorch==1.7.1
  • torchvision==0.8.2
  • timm==0.4.12

Producing Segmentation Data

Using ImageNet-S

To use the ImageNet-S labeled data, download the ImageNetS919 dataset.

Using TokenCut for unsupervised segmentation

  1. Clone the TokenCut project
    git clone https://github.com/YangtaoWANG95/TokenCut.git
    
  2. Install the dependencies: Python 3.7, PyTorch 1.7.1, and CUDA 11.2 (refer to the official PyTorch installation instructions). If CUDA 10.2 has been properly installed:
    pip install torch==1.7.1 torchvision==0.8.2
    
    Followed by:
    pip install -r TokenCut/requirements.txt
    
    
  3. Use the following command to extract the segmentation maps:
    python tokencut_generate_segmentation.py --img_path <PATH_TO_IMAGE> --out_dir <PATH_TO_OUTPUT_DIRECTORY>    
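
To process many images, one option is a small driver loop that calls the script once per file. This is a sketch under the assumption of a flat directory of JPEG images; all paths are placeholders.

import subprocess
from pathlib import Path

img_dir = Path('/path/to/images')       # placeholder: flat directory of input images
out_dir = Path('/path/to/seg_maps')     # placeholder: output directory
out_dir.mkdir(parents=True, exist_ok=True)

for img_path in sorted(img_dir.glob('*.JPEG')):
    # One call per image, mirroring the command above.
    subprocess.run(['python', 'tokencut_generate_segmentation.py',
                    '--img_path', str(img_path),
                    '--out_dir', str(out_dir)],
                   check=True)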
    

Finetuning ViT models

To finetune a pretrained ViT model, use the imagenet_finetune.py script. Make sure to uncomment the import line corresponding to the pretrained model you wish to finetune.

Usage example:

python imagenet_finetune.py --seg_data <PATH_TO_SEGMENTATION_DATA> --data <PATH_TO_IMAGENET> --gpu 0  --lr <LR> --lambda_seg <SEG> --lambda_acc <ACC> --lambda_background <BACK> --lambda_foreground <FORE>

Notes:

  • For all models we use the following values (a sketch assembling the full command from these values follows this list):
    • lambda_seg=0.8
    • lambda_acc=0.2
    • lambda_background=2
    • lambda_foreground=0.3
  • For DeiT models, a temperature is required as follows:
    • temperature=0.65 for DeiT-B
    • temperature=0.55 for DeiT-S
  • The learning rates per model are:
    • ViT-B: 3e-6
    • ViT-L: 9e-7
    • AR-S: 2e-6
    • AR-B: 6e-7
    • AR-L: 9e-7
    • DeiT-S: 1e-6
    • DeiT-B: 8e-7
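
To make the mapping from these values to the command-line flags explicit, here is a small helper that assembles the imagenet_finetune.py invocation from the shared lambda weights and the per-model learning rates above. Paths are placeholders, and DeiT's temperature flag is omitted because its exact flag name is not shown here.

import subprocess

# Per-model learning rates from the list above.
LRS = {'ViT-B': 3e-6, 'ViT-L': 9e-7, 'AR-S': 2e-6, 'AR-B': 6e-7,
       'AR-L': 9e-7, 'DeiT-S': 1e-6, 'DeiT-B': 8e-7}

def finetune_cmd(model_name, seg_data, data, gpu=0):
    # Remember to first uncomment the matching import in imagenet_finetune.py.
    return ['python', 'imagenet_finetune.py',
            '--seg_data', seg_data, '--data', data,
            '--gpu', str(gpu),
            '--lr', str(LRS[model_name]),
            '--lambda_seg', '0.8', '--lambda_acc', '0.2',
            '--lambda_background', '2', '--lambda_foreground', '0.3']

# Example invocation for ViT-B (paths are placeholders):
subprocess.run(finetune_cmd('ViT-B', '/path/to/segmentation_data', '/path/to/imagenet'),
               check=True)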

Baseline methods

Make sure to uncomment the import line corresponding to the pretrained model you wish to finetune in the code.

GradMask

Run the following command:

python imagenet_finetune_gradmask.py --seg_data <PATH_TO_SEGMENTATION_DATA> --data <PATH_TO_IMAGENET> --gpu 0  --lr <LR> --lambda_seg <SEG> --lambda_acc <ACC>

All hyperparameters for the different models can be found in section D of the supplementary material.

Right for the Right Reasons

Run the following command:

python imagenet_finetune_rrr.py --seg_data <PATH_TO_SEGMENTATION_DATA> --data <PATH_TO_IMAGENET> --gpu 0  --lr <LR> --lambda_seg <SEG> --lambda_acc <ACC>

All hyperparameters for the different models can be found in section D of the supplementary material.

Evaluation

Robustness Evaluation

  1. Download the evaluation datasets (e.g., INet-v2, ObjectNet, and the SI datasets referenced by the flags below).

  2. Run the following script to evaluate:

python imagenet_eval_robustness.py --data <PATH_TO_ROBUSTNESS_DATASET> --batch-size <BATCH_SIZE> --evaluate --checkpoint <PATH_TO_FINETUNED_CHECKPOINT>
  • Make sure to uncomment the import line corresponding to the pretrained model you wish to evaluate in the code.
  • To evaluate the original model, simply omit the checkpoint parameter.
  • For the INet-v2 dataset, add --isV2.
  • For the ObjectNet dataset, add --isObjectNet.
  • For the SI datasets, add --isSI (a driver sketch combining these flags follows this list).
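
Putting the dataset-specific flags together, a small driver like the following can evaluate a checkpoint on each robustness set. Dataset paths and the checkpoint file name are placeholders; the flag names are taken from the list above.

import subprocess

def evaluate(data_path, checkpoint=None, batch_size=64, extra_flag=None):
    # Runs imagenet_eval_robustness.py on one robustness dataset.
    cmd = ['python', 'imagenet_eval_robustness.py',
           '--data', data_path, '--batch-size', str(batch_size), '--evaluate']
    if checkpoint is not None:      # omit the checkpoint to evaluate the original model
        cmd += ['--checkpoint', checkpoint]
    if extra_flag is not None:      # '--isV2', '--isObjectNet', or '--isSI'
        cmd.append(extra_flag)
    subprocess.run(cmd, check=True)

# Example: a finetuned checkpoint on INet-v2 (placeholders).
evaluate('/path/to/imagenetv2', checkpoint='finetuned_vit_b.pth', extra_flag='--isV2')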

Segmentation Evaluation

Our segmentation tests are based on the test in the official implementation of Transformer Interpretability Beyond Attention Visualization.

  1. Download the ImageNet segmentation test set.
  2. Run the following script to evaluate:
PYTHONPATH=./:$PYTHONPATH python SegmentationTest/imagenet_seg_eval.py  --imagenet-seg-path <PATH_TO_gtsegs_ijcv.mat>
  • Make sure to uncomment the import line corresponding to the pretrained model you wish to evaluate in the code.

Credits

We would like to sincerely thank the authors of the works our code builds on for their great work.

Citing our paper

If you make use of our work, please cite our paper:

@inproceedings{chefer2022optimizing,
  title={Optimizing Relevance Maps of Vision Transformers Improves Robustness},
  author={Hila Chefer and Idan Schwartz and Lior Wolf},
  booktitle={Thirty-Sixth Conference on Neural Information Processing Systems},
  year={2022},
  url={https://openreview.net/forum?id=upuYKQiyxa_}
}
