MixMIM: Mixed and Masked Image Modeling for Efficient Visual Representation Learning

PyTorch implementation of MixMAE (CVPR 2023)

This repo is the official implementation of the paper MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers.

@article{MixMAE,
  author  = {Jihao Liu and Xin Huang and Jinliang Zheng and Yu Liu and Hongsheng Li},
  journal = {arXiv:2205.13137},
  title   = {MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers},
  year    = {2022},
}

Available pretrained models

| Model | Params (M) | FLOPs (G) | Pretrain epochs | Top-1 acc. (%) | Pretrain ckpt | Finetune ckpt |
| --- | --- | --- | --- | --- | --- | --- |
| Swin-B/W14 | 88 | 16.3 | 600 | 85.1 | base_600ep | base_600ep_ft |
| Swin-B/W16-384x384 | 89.6 | 52.6 | 600 | 86.3 | base_600ep | base_600ep_ft_384x384 |
| Swin-L/W14 | 197 | 35.9 | 600 | 85.9 | large_600ep | large_600ep_ft |
| Swin-L/W16-384x384 | 199 | 112 | 600 | 86.9 | large_600ep | large_600ep_ft_384x384 |
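The sketch below shows one way to inspect a downloaded checkpoint before plugging it into your own code. It is a minimal sketch, assuming the file is saved locally as `mixmae_base_600ep.pth` and that the weights may sit under a `model` key; both are assumptions, since the exact filename and key layout depend on how the checkpoint was saved.

```python
import torch

# Hypothetical filename; use whatever the downloaded checkpoint is saved as.
ckpt = torch.load("mixmae_base_600ep.pth", map_location="cpu")

# Unwrap the state dict if the weights are nested under a "model" key (assumption).
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt

print(f"{len(state_dict)} tensors")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))

# When loading into your own Swin implementation, strict=False tolerates
# keys (e.g. decoder or head weights) that the target model does not define:
# missing, unexpected = model.load_state_dict(state_dict, strict=False)
```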

Training and evaluation

We use Slurm for multi-node distributed pretraining and finetuning.

Pretrain

sh exp/base_600ep/pretrain.sh partition 16 /path/to/imagenet
  • Trains with 16 GPUs on the given Slurm partition.
  • The effective batch size is 128 * 16 = 2048 (128 per GPU).
  • The default setting pretrains for 600 epochs with a mask ratio of 0.5 (see the mixing sketch below).
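The 0.5 mask ratio is what enables the mixing described in the paper: rather than dropping masked tokens, the masked positions of one image are filled with the visible tokens of another image, so the encoder always processes a dense input and both images are reconstructed from the mixed result. The snippet below is a minimal, image-space sketch of that idea, not the repository's actual data pipeline; `mix_images`, its signature, and the pixel-level blending are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mix_images(x1, x2, patch_size=16, mask_ratio=0.5):
    # Sketch only (not the repo's code): fill the masked patch positions of x1
    # with the corresponding patches of x2, producing a dense mixed image.
    B, C, H, W = x1.shape
    h, w = H // patch_size, W // patch_size
    num_patches = h * w
    num_masked = int(num_patches * mask_ratio)

    # Per-sample random patch mask: 1 marks positions taken from x2.
    ids = torch.rand(B, num_patches).argsort(dim=1)
    mask = torch.zeros(B, num_patches)
    mask.scatter_(1, ids[:, :num_masked], 1.0)

    # Upsample the patch-level mask to pixel resolution and blend the two images.
    mask_px = F.interpolate(mask.view(B, 1, h, w), scale_factor=patch_size, mode="nearest")
    mixed = x1 * (1.0 - mask_px) + x2 * mask_px
    return mixed, mask

# Example: two random 224x224 batches mixed with the default 0.5 mask ratio.
mixed, mask = mix_images(torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224))
```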

Finetune

sh exp/base_600ep/finetune.sh partition 8 /path/to/imagenet
  • Trains with 8 GPUs on the given Slurm partition.
  • The effective batch size is 128 * 8 = 1024 (128 per GPU).
  • The default setting finetunes for 100 epochs (a single-GPU evaluation sketch follows below).
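To sanity-check a finetuned checkpoint's top-1 accuracy without going through Slurm, a plain single-GPU loop like the one below is enough. This is a sketch under assumptions: `model` is assumed to be the network built by the finetuning code with the checkpoint weights already loaded, and the transform is the standard 224x224 ImageNet evaluation recipe (the 384x384 models would need larger crops), not necessarily the repository's exact evaluation settings.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def evaluate(model, imagenet_val_dir, batch_size=64, device="cuda"):
    # Standard 224x224 ImageNet evaluation preprocessing (assumption).
    tfm = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    loader = DataLoader(datasets.ImageFolder(imagenet_val_dir, tfm),
                        batch_size=batch_size, num_workers=8)

    model.eval().to(device)
    correct = total = 0
    with torch.no_grad():
        for images, targets in loader:
            logits = model(images.to(device))
            correct += (logits.argmax(dim=1).cpu() == targets).sum().item()
            total += targets.numel()
    return correct / total  # top-1 accuracy
```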
