Awesome-Self-supervised-Learning-of-Tiny-Models

Self-supervised Learning of Tiny Models

Note that this repository covers both Distillation-based methods (a.k.a. Self-supervised Distillation) and Non-distillation methods.

[ Updating ...... ]
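Most of the distillation-based methods listed below share a common skeleton: a small student network is trained on unlabeled data to match the embeddings of a frozen, self-supervised teacher (SimReg, for instance, shows that plain feature regression is already a strong baseline). The snippet below is a minimal, illustrative PyTorch sketch of that generic setup; the backbones, projector shape, and optimizer settings are assumptions for demonstration only, not the exact recipe of any paper in this list.

```python
# Minimal sketch of generic self-supervised feature distillation (SimReg-style).
# All hyper-parameters and model choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

teacher = models.resnet50(weights=None)   # assumption: SSL-pre-trained weights loaded elsewhere
teacher.fc = nn.Identity()                # expose 2048-d backbone features
teacher.eval()
for p in teacher.parameters():
    p.requires_grad = False               # teacher stays frozen

student = models.resnet18(weights=None)   # the "tiny" model being distilled
student.fc = nn.Identity()                # 512-d backbone features
projector = nn.Sequential(                # maps student features to the teacher's dimension
    nn.Linear(512, 2048), nn.ReLU(inplace=True), nn.Linear(2048, 2048)
)

optimizer = torch.optim.SGD(
    list(student.parameters()) + list(projector.parameters()), lr=0.05, momentum=0.9
)

def distill_step(images):
    """One unlabeled batch: regress student features onto frozen teacher features."""
    with torch.no_grad():
        t = F.normalize(teacher(images), dim=1)
    s = F.normalize(projector(student(images)), dim=1)
    loss = 2 - 2 * (s * t).sum(dim=1).mean()   # MSE between unit vectors
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# usage (dummy data): loss = distill_step(torch.randn(8, 3, 224, 224))
```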

2022


  • Improving Self-Supervised Lightweight Model Learning via Hard-Aware Metric Distillation (SMD - ECCV22 Oral) [paper] [code]

    · · Author(s): Hao Liu, Mang Ye
    · · Organization(s): Wuhan University; Beijing Institute of Technology


  • DisCo: Remedy Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning (DisCo - ECCV22 Oral) [paper] [code]

    · · Author(s): Yuting Gao, Jia-Xin Zhuang, Shaohui Lin, Hao Cheng, Xing Sun, Ke Li, Chunhua Shen
    · · Organization(s): Tencent Youtu Lab; Hong Kong University of Science and Technology; East China Normal University; Zhejiang University


  • Unsupervised Representation Learning for Binary Networks by Joint Classifier Learning (BURN - CVPR22) [paper] [code]

    · · Author(s): Dahyun Kim, Jonghyun Choi
    · · Organization(s): Upstage AI Research; NAVER AI Lab.; Yonsei University


  • Bag of Instances Aggregation Boosts Self-supervised Distillation (BINGO - ICLR22) [paper] [code]

    · · Author(s): Haohang Xu, Jiemin Fang, Xiaopeng Zhang, Lingxi Xie, Xinggang Wang, Wenrui Dai, Hongkai Xiong, Qi Tian
    · · Organization(s): Shanghai Jiao Tong University; Huawei Inc.; Huazhong University of Science & Technology


  • Representation Distillation by Prototypical Contrastive Predictive Coding (ProtoCPC - ICLR22) [paper]

    · · Author(s): Kyungmin Lee
    · · Organization(s): Agency for Defense Development


  • Boosting Contrastive Learning with Relation Knowledge Distillation (ReKD - AAAI22) [paper]

    · · Author(s): Kai Zheng, Yuanjiang Wang, Ye Yuan
    · · Organization(s): Megvii Technology


  • On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals (**** - AAAI22) [paper] [code]

    · · Author(s): Haizhou Shi, Youcai Zhang, Siliang Tang, Wenjie Zhu, Yaqian Li, Yandong Guo, Yueting Zhuang
    · · Organization(s): OPPO Research Institute; Zhejiang University; New York University


  • Attention Distillation: self-supervised vision transformer students need more guidance (AttnDistill - BMVC22) [paper] [code]

    · · Author(s): Kai Wang, Fei Yang, Joost van de Weijer
    · · Organization(s): Universitat Autònoma de Barcelona


  • Distilling Knowledge from Self-Supervised Teacher by Embedding Graph Alignment (**** - BMVC22) [paper not released]

    · · Author(s): Yuchen Ma, Yanbei Chen, Zeynep Akata
    · · Organization(s): Heidelberg University; University of Tübingen


  • Dual-Level Knowledge Distillation via Knowledge Alignment and Correlation (DLKD - TNNLS22) [paper] [code]

    · · Author(s): Fei Ding, Yin Yang, Hongxin Hu, Venkat Krovi, Feng Luo
    · · Organization(s): Clemson University; University at Buffalo, The State University of New York


  • Pixel-Wise Contrastive Distillation (PCD - arXiv22) [paper]

    · · Author(s): Junqiang Huang, Zichao Guo
    · · Organization(s): Shopee


  • Effective Self-supervised Pre-training on Low-compute networks without Distillation (**** - arXiv22) [paper]

    · · Author(s): Fuwen Tan, Fatemeh Saleh, Brais Martinez
    · · Organization(s): Samsung AI Cambridge; Microsoft Research Cambridge


  • A Closer Look at Self-supervised Lightweight Vision Transformers (MAE-lite - arXiv22) [paper]

    · · Author(s): Shaoru Wang, Jin Gao, Zeming Li, Jian Sun, Weiming Hu
    · · Organization(s): Institute of Automation, Chinese Academy of Sciences; Megvii Technology; University of Chinese Academy of Sciences; CAS Center for Excellence in Brain Science and Intelligence Technology

2021


  • S^2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration (S^2-BNN - CVPR21) [paper] [code]

    · · Author(s): Zhiqiang Shen, Zechun Liu, Jie Qin, Lei Huang, Kwang-Ting Cheng, Marios Savvides
    · · Organization(s): Carnegie Mellon University; Hong Kong University of Science and Technology; Inception Institute of Artificial Intelligence


  • Distill on the Go: Online Knowledge Distillation in Self-Supervised Learning (DoGo - CVPRW21) [paper]

    · · Author(s): Prashant Bhat, Elahe Arani, Bahram Zonooz
    · · Organization(s): Advanced Research Lab, NavInfo Europe, Eindhoven, The Netherlands


  • Unsupervised Representation Transfer for Small Networks: I Believe I Can Distill On-the-Fly (OSS - NeurIPS21) [paper]

    · · Author(s): Hee Min Choi, Hyoa Kang, Dokwan Oh
    · · Organization(s): Samsung Advanced Institute of Technology


  • SEED: Self-supervised Distillation For Visual Representation (SEED - ICLR21) [paper]

    · · Author(s): Zhiyuan Fang, Jianfeng Wang, Lijuan Wang, Lei Zhang, Yezhou Yang, Zicheng Liu
    · · Organization(s): Arizona State University; Microsoft Corporation


  • SimReg: Regression as a Simple Yet Effective Tool for Self-supervised Knowledge Distillation (SimReg - BMVC21) [paper] [code]

    · · Author(s): K. L. Navaneet, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash
    · · Organization(s): University of Maryland; University of California, Davis


  • ProtoSEED: Prototypical Self-Supervised Representation Distillation (ProtoSEED - NeurIPSW21) [paper]

    · · Author(s): Kyungmin Lee
    · · Organization(s): Agency for Defense Development


  • Simple Distillation Baselines for Improving Small Self-supervised Models (SimDis - arXiv21) [paper] [code]

    · · Author(s): Jindong Gu, Wei Liu, Yonglong Tian
    · · Organization(s): University of Munich; Tencent; MIT


  • Self-Supervised Visual Representation Learning Using Lightweight Architectures (**** - arXiv21) [paper]

    · · Author(s): Prathamesh Sonawane, Sparsh Drolia, Saqib Shamsi, Bhargav Jain
    · · Organization(s): Pune Institute of Computer Technology; Whirlpool Corporation

2020


  • CompRess: Self-Supervised Learning by Compressing Representations (CompRess - NeurIPS20) [paper] [code]

    · · Author(s): Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash
    · · Organization(s): University of Maryland

2018


  • Boosting Self-Supervised Learning via Knowledge Transfer (**** - CVPR18) [paper]

    · · Author(s): Mehdi Noroozi, Ananth Vinjimoor, Paolo Favaro, Hamed Pirsiavash
    · · Organization(s): University of Bern; University of Maryland, Baltimore County

Some Related Influential Repositories


  • awesome-self-supervised-learning (star 5.3k) [link]

  • Awesome-Knowledge-Distillation (star 2k) [link]

  • DeepClustering (star 2k) [link]

  • awesome-AutoML-and-Lightweight-Models (star 784) [link]

  • awesome_lightweight_networks (star 540) [link]

Thanks to Prof. Yu Zhou for the support.
