falcon-xu/early-exit-papers
Early Exiting

A curated list of papers on early exiting: dynamic inference that attaches intermediate classifiers to a network and stops computation once an exit is confident enough.
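For readers new to the topic: most of the methods below share one control flow, where exit classifiers are tried in order and inference stops at the first one whose output is already confident, e.g. when its entropy falls below a threshold, as in BranchyNet. A minimal, framework-free sketch of that idea (all names and numbers here are illustrative, not from any specific paper):

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def early_exit_infer(x, exits, threshold=0.3):
    """Run exit classifiers in order and stop at the first whose
    output distribution is confident (entropy below `threshold`);
    otherwise fall back to the final exit."""
    for depth, exit_fn in enumerate(exits):
        probs = exit_fn(x)
        if entropy(probs) < threshold:
            break  # confident enough: skip the deeper layers
    label = max(range(len(probs)), key=probs.__getitem__)  # argmax
    return label, depth

# Toy stand-ins for internal classifiers on a backbone:
shallow = lambda x: [0.4, 0.6]    # uncertain -> keep going
deep    = lambda x: [0.05, 0.95]  # confident -> exit here

label, depth = early_exit_infer(None, [shallow, deep])
print(label, depth)  # 1 1
```

In a real multi-exit network the `exit_fn` calls would share one backbone forward pass, so breaking early actually saves compute; here they are independent lambdas purely to show the stopping rule.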

Papers

CV

  1. Conditional deep learning for energy-efficient and enhanced pattern recognition. DATE 2016

    Priyadarshini Panda, Abhronil Sengupta, and Kaushik Roy. [pdf]

  2. BranchyNet: Fast Inference via Early Exiting from Deep Neural Networks. ICPR 2016

    Surat Teerapittayanon, Bradley McDanel, and H.T. Kung. [pdf]

  3. Shallow-Deep Networks: Understanding and Mitigating Network Overthinking. ICML 2019

    Yigitcan Kaya, Sanghyun Hong, and Tudor Dumitras. [pdf] [code]

  4. Improved Techniques for Training Adaptive Deep Networks. ICCV 2019

    Hao Li, Hong Zhang, Xiaojuan Qi, Ruigang Yang, Gao Huang. [pdf] [code]

  5. HAPI: Hardware-Aware Progressive Inference. ICCAD 2020

    S. Laskaridis et al. [pdf]

  6. Edge AI: On-Demand Accelerating Deep Neural Network Inference via Edge Computing. IEEE TWC 2020

    E. Li et al. [pdf]

  7. SPINN: Synergistic Progressive Inference of Neural Networks over Device and Cloud. MobiCom 2020

    Stefanos Laskaridis et al. [pdf]

  8. FlexDNN: Input-Adaptive On-Device Deep Learning for Efficient Mobile Vision. SEC 2020

    Biyi Fang et al. [pdf]

  9. Dual Dynamic Inference: Enabling More Efficient, Adaptive, and Controllable Deep Inference. IEEE Journal of Selected Topics in Signal Processing, 2020

    Yue Wang et al. [pdf]

  10. Learning to Stop While Learning to Predict. ICML 2020

    Xinshi Chen, Hanjun Dai, Yu Li, Xin Gao, Le Song [pdf] [code]

  11. Zero Time Waste: Recycling Predictions in Early Exit Neural Networks. NeurIPS 2021

    Maciej Wołczyk, Bartosz Wójcik, Klaudia Bałazy, Igor Podolak, Jacek Tabor, Marek Śmieja, Tomasz Trzciński [pdf] [code]

  12. Self-Distillation Towards Efficient and Compact Neural Networks. TPAMI 2021

    Linfeng Zhang, Chenglong Bao, Kaisheng Ma [pdf] [code]

  13. DeeCap: Dynamic Early Exiting for Efficient Image Captioning. CVPR 2022

    Zhengcong Fei, Xu Yan, Shuhui Wang, Qi Tian [pdf] [code]

  14. Multi-Exit Semantic Segmentation Networks. ECCV 2022

    Alexandros Kouris, Stylianos I. Venieris, Stefanos Laskaridis, and Nicholas D. Lane. [pdf]

  15. Meta-GF: Training Dynamic-Depth Neural Networks Harmoniously. ECCV 2022

    Yi Sun, Jian Li, Xin Xu. [pdf] [code]

  16. Single-Layer Vision Transformers for More Accurate Early Exits with Less Overhead. Neural Networks 2022

    Arian Bakhtiarnia, Qi Zhang, Alexandros Iosifidis. [pdf]

  17. Learning to Weight Samples for Dynamic Early-Exiting Networks. ECCV 2022

    Yizeng Han, Yifan Pu, Zihang Lai, Chaofei Wang, Shiji Song, Junfeng Cao, Wenhui Huang, Chao Deng, Gao Huang [pdf] [code]

  18. ReX: An Efficient Approach to Reducing Memory Cost in Image Classification. AAAI 2022.

    Xuwei Qian, Renlong Hang, Qingshan Liu [pdf]

  19. You Need Multiple Exiting: Dynamic Early Exiting for Accelerating Unified Vision Language Model. CVPR 2023.

    Shengkun Tang, Yaqing Wang, Zhenglun Kong, Tianchi Zhang, Yao Li, Caiwen Ding, Yanzhi Wang, Yi Liang, Dongkuan Xu [pdf]

  20. Dynamic Perceiver for Efficient Visual Recognition. ICCV 2023.

    Yizeng Han, Dongchen Han, Zeyu Liu, Yulin Wang, Xuran Pan, Yifan Pu, Chao Deng, Junlan Feng, Shiji Song, Gao Huang [pdf] [code]

  21. HarvNet: Resource-Optimized Operation of Multi-Exit Deep Neural Networks on Energy Harvesting Devices. MobiSys 2023.

    Seunghyeok Jeon, Yonghun Choi, Yeonwoo Cho, and Hojung Cha [pdf]

  22. LGViT: Dynamic Early Exiting for Accelerating Vision Transformer. ACM MM 2023.

    Guanyu Xu, Jiawei Hao, Li Shen, Han Hu, Yong Luo, Hui Lin, Jialie Shen [pdf] [code]

  23. Boosted Dynamic Neural Networks. AAAI 2023.

    Haichao Yu, Haoxiang Li, Gang Hua, Gao Huang, Humphrey Shi [pdf] [code]

  24. Window-Based Early-Exit Cascades for Uncertainty Estimation: When Deep Ensembles are More Efficient than Single Models. ICCV 2023.

    Guoxuan Xia, Christos-Savvas Bouganis [pdf] [code]

NLP

Dynamic Methods

  1. DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference. ACL 2020.

    Ji Xin, Raphael Tang, Jaejun Lee, Yaoliang Yu, and Jimmy Lin. [pdf] [code]

  2. The Right Tool for the Job: Matching Model and Instance Complexities. ACL 2020.

    Roy Schwartz, Gabriel Stanovsky, Swabha Swayamdipta, Jesse Dodge, and Noah A. Smith. [pdf]

  3. FastBERT: a Self-distilling BERT with Adaptive Inference Time. ACL 2020.

    Weijie Liu, Peng Zhou, Zhiruo Wang, Zhe Zhao, Haotang Deng, and Qi Ju. [pdf] [code]

  4. Early Exiting BERT for Efficient Document Ranking. ACL 2020.

    Ji Xin, Rodrigo Nogueira, Yaoliang Yu, Jimmy Lin. [pdf]

  5. BERT Loses Patience: Fast and Robust Inference with Early Exit. NeurIPS 2020.

    Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian McAuley, Ke Xu, Furu Wei. [pdf] [code]

  6. DynaBERT: Dynamic BERT with Adaptive Width and Depth. NeurIPS 2020.

    Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu. [pdf]

  7. A Global Past-Future Early Exit Method for Accelerating Inference of Pre-trained Language Models. NAACL 2021.

    Kaiyuan Liao, Yi Zhang, Xuancheng Ren, Qi Su, Xu Sun, Bin He. [pdf] [code]

  8. RomeBERT: Robust Training of Multi-Exit BERT. Preprint Jan 2021.

    Shijie Geng, Peng Gao, Zuohui Fu, Yongfeng Zhang. [pdf] [code]

  9. BERxiT: Early Exiting for BERT with Better Fine-Tuning and Extension to Regression. EACL 2021.

    Ji Xin, Raphael Tang, Yaoliang Yu, Jimmy Lin. [pdf] [code]

  10. Accelerating BERT Inference for Sequence Labeling via Early-Exit. ACL 2021.

    Xiaonan Li, Yunfan Shao, Tianxiang Sun, Hang Yan, Xipeng Qiu, Xuanjing Huang. [pdf]

  11. LeeBERT: Learned Early Exit for BERT with Cross-Level Optimization. ACL 2021.

    Wei Zhu. [pdf]

  12. TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference. ACL 2021.

    Deming Ye, Yankai Lin, Yufei Huang, Maosong Sun. [pdf]

  13. EBERT: Efficient BERT Inference with Dynamic Structured Pruning. ACL Findings 2021.

    Zejian Liu, Fanrong Li, Gang Li, Jian Cheng. [pdf] [code]

  14. Early Exiting with Ensemble Internal Classifiers. Preprint May 2021.

    Tianxiang Sun, Yunhua Zhou, Xiangyang Liu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu. [pdf]

  15. ELBERT: Fast Albert with Confidence-Window Based Early Exit. ICASSP 2021.

    Keli Xie, Siyuan Lu, Meiqi Wang, Zhongfeng Wang. [pdf]

  16. CascadeBERT: Accelerating Inference of Pre-trained Language Models via Calibrated Complete Models Cascade. EMNLP 2021.

    Lei Li, Yankai Lin, Deli Chen, Shuhuai Ren, Peng Li, Jie Zhou, Xu Sun. [pdf] [code]

  17. Consistent Accelerated Inference via Confident Adaptive Transformers. EMNLP 2021.

    Tal Schuster, Adam Fisch, Tommi Jaakkola, Regina Barzilay. [pdf] [code]

  18. DACT-BERT: Differentiable Adaptive Computation Time for an Efficient BERT Inference. ACL NLP Power Workshop 2022

    Cristóbal Eyzaguirre, Felipe del Río, Vladimir Araujo, Álvaro Soto. [pdf]

  19. Towards Efficient NLP: A Standard Evaluation and A Strong Baseline. NAACL 2022.

    Xiangyang Liu, Tianxiang Sun, Junliang He, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu. [pdf]

  20. PCEE-BERT: Accelerating BERT Inference via Patient and Confident Early Exiting, NAACL Findings 2022.

    Zhen Zhang, Wei Zhu, Jinfan Zhang, Peng Wang, Rize Jin, Tae-Sun Chung. [pdf] [code]

  21. E2CM: Early Exit via Class Means for Efficient Supervised and Unsupervised Learning. IJCNN 2022 (WCCI 2022)

    Alperen Görmez, Venkat R. Dasari, Erdem Koyuncu. [pdf] [code]

  22. SkipBERT: Efficient Inference with Shallow Layer Skipping. ACL 2022

    Jue Wang, Ke Chen, Gang Chen, Lidan Shou, Julian McAuley [pdf] [project]

  23. Unsupervised Early Exit in DNNs with Multiple Exits. AI-ML Systems 2022

    Hari Narayan N U, Manjesh K. Hanawal, Avinash Bhardwaj. [pdf] [code]

  24. Finding the SWEET Spot: Analysis and Improvement of Adaptive Inference in Low Resource Settings. ACL 2023.

    Daniel Rotem, Michael Hassid, Jonathan Mamou, Roy Schwartz. [pdf] [code]

  25. SmartBERT: A Promotion of Dynamic Early Exit Mechanism for Accelerating BERT Inference. IJCAI 2023.

    Boren Hu, Yun Zhu, Jiacheng Li, Siliang Tang. [pdf]

  26. BADGE: Speeding Up BERT Inference after Deployment via Block-wise BypAsses and DiverGence-Based Early Exiting. ACL 2023.

    Wei Zhu, Peng Wang, Yuan Ni, Guotong Xie, Xiaoling Wang. [pdf]
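Not all of the dynamic methods above threshold a confidence score: BERT Loses Patience (entry 5), for example, exits once several consecutive internal classifiers agree on the same label, which sidesteps per-task threshold calibration. A minimal, model-free sketch of that PABEE-style stopping rule (the function and variable names are illustrative; a real model would compute the per-layer predictions lazily rather than take them as a list):

```python
def patience_exit(exit_labels, patience=2):
    """Exit as soon as `patience + 1` consecutive internal
    classifiers predict the same label; otherwise use the last
    layer. `exit_labels` stands in for per-layer argmax outputs."""
    streak, prev = 0, None
    for i, label in enumerate(exit_labels):
        streak = streak + 1 if label == prev else 1
        prev = label
        if streak > patience:
            return label, i  # stop: deeper layers are skipped
    return prev, len(exit_labels) - 1  # ran out of layers

# Layers 1-3 (0-indexed) keep predicting class 7, so with
# patience=2 inference stops at layer 3 of this 6-layer stack.
label, layer = patience_exit([3, 7, 7, 7, 7, 7], patience=2)
print(label, layer)  # 7 3
```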

Static Methods

  1. Depth-Adaptive Transformer. ICLR 2020.

    Maha Elbayad, Jiatao Gu, Edouard Grave, Michael Auli. [pdf]

  2. Reducing Transformer Depth on Demand with Structured Dropout, ICLR 2020.

    Angela Fan, Edouard Grave, Armand Joulin. [pdf] [code]

  3. Faster Depth-Adaptive Transformers. AAAI 2021.

    Yijin Liu, Fandong Meng, Jie Zhou, Yufeng Chen, Jinan Xu. [pdf]

  4. A Simple Hash-Based Early Exiting Approach For Language Understanding and Generation. Findings of ACL 2022.

    Tianxiang Sun, Xiangyang Liu, Wei Zhu, Zhichao Geng, Lingling Wu, Yilong He, Yuan Ni, Guotong Xie, Xuanjing Huang, Xipeng Qiu [pdf] [code]

Survey

  1. Adaptive Inference through Early-Exit Networks: Design, Challenges and Directions. EMDL 2021.

    Stefanos Laskaridis, Alexandros Kouris, Nicholas D. Lane. [pdf]

  2. An Empirical Study on Adaptive Inference for Pretrained Language Model. TNNLS 2021.

    Weijie Liu, Xin Zhao, Zhe Zhao, Qi Ju, Xuefeng Yang, and Wei Lu. [pdf]

  3. Split Computing and Early Exiting for Deep Learning Applications: Survey and Research Challenges. ACM Computing Surveys 2022.

    Y Matsubara, M Levorato, F Restuccia. [pdf]

  4. Dynamic Neural Networks: A Survey. TPAMI 2021.

    Yizeng Han, Gao Huang, Shiji Song, Le Yang, Honghui Wang, Yulin Wang. [pdf]

  5. A Survey on Dynamic Neural Networks for Natural Language Processing. Journal of Mechanics of Continua and Mathematical Sciences 2022.

    Canwen Xu, Julian McAuley. [pdf]

Acknowledgments

This repository is built upon awesome-early-exiting. Thanks for the awesome project!