Code for "Language Model Knowledge Distillation for Efficient Question Answering in Spanish" (ICLR 2024 Tiny Papers)
Updated Dec 5, 2023 · Python
Code for "Language Model Knowledge Distillation for Efficient Question Answering in Spanish" (ICLR 2024 Tiny Papers)
Denoising Diffusion Step-aware Models (ICLR 2024)
[ECCV 2020 Oral] MutualNet: Adaptive ConvNet via Mutual Learning from Network Width and Resolution
Official Website for the Workshop on Advancing Neural Networks Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@ICML 2024, WANT@NeurIPS 2023)
Recent Advances on Efficient Vision Transformers
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
Code for reproducing the results in the NNCodec ICML Workshop paper; also includes a demo prepared for the Neural Compression Workshop (NCW).
[ICML 2024] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers.
Code repository of the paper "Exploiting Redundancy: Separable Group Convolutional Networks on Lie Groups" https://proceedings.mlr.press/v162/knigge22a.html
Official PyTorch training code of Accelerating Deep Neural Networks via Semi-Structured Activation Sparsity (ICCV2023-RCV)
A generic code base for neural network pruning, especially for pruning at initialization.
[ICLR 2022] "Peek-a-Boo: What (More) is Disguised in a Randomly Weighted Neural Network, and How to Find It Efficiently", by Xiaohan Chen, Jason Zhang and Zhangyang Wang.
[Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning
[ICLR'23] Trainability Preserving Neural Pruning (PyTorch)
Official PyTorch implementation of "Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets" (ICLR 2023 notable top 25%)
[CVPR 2024] Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis
Official implementation of "EAGLES: Efficient Accelerated 3D Gaussians with Lightweight EncodingS"
[ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Diffenderfer, Jiancheng Liu, Konstantinos Parasyris, Yihua Zhang, Zheng Zhang, Bhavya Kailkhura, Sijia Liu
[IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization.
[NeurIPS 2019 Google MicroNet Challenge] MSUNet, designed by Yu Zheng, Shen Yan, and Mi Zhang, is an efficient model that won 4th place in the Google MicroNet Challenge CIFAR-100 Track hosted at NeurIPS 2019.