# MoCo-v2

A PyTorch implementation of MoCo-v1 with MoCo-v2 improvements, pre-trained on Imagenette.

## Environment

To install the requirements, run:

```bash
conda env create -f environment.yml
```

## Downloading Data

Simply run:

```bash
python download_data.py
```
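
For reference, a minimal sketch of what such a download script can do, assuming the standard full-size Imagenette v2 archive hosted by fast.ai (the actual download_data.py may differ):

```python
import tarfile
import urllib.request
from pathlib import Path

# Standard full-size Imagenette v2 archive hosted by fast.ai (assumed source).
URL = "https://s3.amazonaws.com/fast-ai-imageclas/imagenette2.tgz"

def download_imagenette(root: str = "data") -> Path:
    """Download and extract Imagenette into `root`, skipping work already done."""
    root_path = Path(root)
    root_path.mkdir(parents=True, exist_ok=True)
    archive = root_path / "imagenette2.tgz"
    if not archive.exists():
        urllib.request.urlretrieve(URL, archive)
    extracted = root_path / "imagenette2"
    if not extracted.exists():
        with tarfile.open(archive) as tar:
            tar.extractall(root_path)
    return extracted

if __name__ == "__main__":
    print(f"Imagenette available at: {download_imagenette()}")
```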

## Unsupervised Pre-Training

Hyperparameters for recreating our results:

| backbone | batch size | temperature | queue size | cos | mlp | aug+ |
| --- | --- | --- | --- | --- | --- | --- |
| ResNeXt-50-32x4d | 32 | 0.2 | 16384 | ✓ | ✓ | ✓ |
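
As a rough illustration, the table maps onto config.py entries along these lines (the key names here are hypothetical; check the actual file):

```python
# Hypothetical config.py values matching the table above; the real key
# names in this repository may differ.
backbone = "resnext50_32x4d"  # torchvision name for ResNeXt-50-32x4d
batch_size = 32
temperature = 0.2             # softmax temperature for the InfoNCE loss
queue_size = 16384            # number of negative keys kept in the queue
cos = True                    # cosine learning-rate schedule (MoCo-v2)
mlp = True                    # MLP projection head (MoCo-v2)
aug_plus = True               # stronger augmentation recipe (MoCo-v2)
```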

To recreate our results, adjust config.py to match the hyperparameters above; then, on a single-GPU machine, run:

```bash
python main_moco.py
```

Or, on a Slurm cluster, run:

```bash
sbatch -c 2 --gres=gpu:1 -o out_moco.out -J run_moco run_moco.sh
```
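
For orientation, here is a minimal sketch of the core MoCo update that a script like main_moco.py performs: the key encoder is a momentum-averaged copy of the query encoder, and the InfoNCE loss contrasts each positive pair against the queue of negatives, scaled by the temperature from the table above. Names and signatures are illustrative, not the repository's actual API:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m: float = 0.999):
    """Key encoder tracks the query encoder as an exponential moving average."""
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

def info_nce_loss(q, k, queue, temperature: float = 0.2):
    """q, k: L2-normalized (N, C) query/key features; queue: (C, K) negatives."""
    l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)      # (N, 1) positive logits
    l_neg = torch.einsum("nc,ck->nk", q, queue)               # (N, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature   # (N, 1 + K)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)                    # positive key is class 0
```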

## Linear Classification

Using the pre-trained model with frozen weights, we achieve 92.8% top-1 accuracy with linear classification on the Imagenette validation set.

As before, adjust config.py to match our hyperparameters; then, on a single-GPU machine, run:

```bash
python main_clf.py
```

Or, on a Slurm cluster, run:

```bash
sbatch -c 2 --gres=gpu:1 -o out_clf.out -J run_clf run_clf.sh
```
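
Conceptually, this phase freezes the pre-trained backbone and trains only a linear head on top of it. A minimal sketch, with hypothetical names (main_clf.py may be structured differently):

```python
import torch
import torch.nn as nn

def build_linear_probe(backbone: nn.Module, feat_dim: int, num_classes: int = 10):
    """Freeze the pre-trained backbone; only the linear head receives gradients."""
    for p in backbone.parameters():
        p.requires_grad = False
    backbone.eval()  # keep BatchNorm running statistics fixed
    head = nn.Linear(feat_dim, num_classes)  # Imagenette has 10 classes
    model = nn.Sequential(backbone, head)
    # Only the head's parameters are optimized; the learning rate is illustrative.
    optimizer = torch.optim.SGD(head.parameters(), lr=30.0, momentum=0.9)
    return model, optimizer
```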

## Results

| phase | epochs | time | top-1 accuracy | checkpoint |
| --- | --- | --- | --- | --- |
| pre-training | 600 | 97 hours | 0.92367 | download |
| linear classification | 200 | 5 hours | 0.92777 | download |

To evaluate the results on the Imagenette train and validation sets, run (after downloading the data):

```bash
python evaluate.py
```

Or, on a Slurm cluster, run:

```bash
sbatch -c 2 --gres=gpu:1 -o out_evaluate.out -J run_evaluate run_evaluate.sh
```
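
The evaluation boils down to a top-1 accuracy computation over the train and validation loaders; a sketch of that computation (illustrative, not the actual evaluate.py):

```python
import torch

@torch.no_grad()
def top1_accuracy(model, loader, device: str = "cuda") -> float:
    """Fraction of samples whose highest-scoring class matches the label."""
    model.eval()
    correct, total = 0, 0
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == targets).sum().item()
        total += targets.numel()
    return correct / total
```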

Training logs can be found in logs, along with plots of:

- pre-training loss
- pre-training top-1 accuracy
- linear classification loss
- linear classification top-1 accuracy

## Implementation details

- Our implementation only supports single-GPU training, so we do not implement batch shuffling (which multi-GPU MoCo uses to keep BatchNorm statistics from leaking information between positive pairs).
- The top-1 accuracy reported during training is an approximation computed from the encodings (train and val, under torch.no_grad and in evaluation mode); see the sketch after this list.
- We ran training in a few consecutive sessions on NVIDIA GeForce GTX 1080 Ti, GeForce RTX 2080 Ti, and Titan Xp GPUs.
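
One common way to approximate accuracy from stored encodings is a nearest-neighbor lookup between validation and training features; the sketch below assumes that approach (the repository's actual approximation may differ):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def knn_top1(train_feats, train_labels, val_feats, val_labels) -> float:
    """1-NN accuracy: each val encoding takes the label of its most
    cosine-similar train encoding."""
    train_feats = F.normalize(train_feats, dim=1)
    val_feats = F.normalize(val_feats, dim=1)
    sims = val_feats @ train_feats.t()   # (N_val, N_train) similarity matrix
    nearest = sims.argmax(dim=1)         # index of the closest train sample
    preds = train_labels[nearest]
    return (preds == val_labels).float().mean().item()
```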
