A tutorial on how to prune the embedding layer of a language model and craft a suitable tokenizer (a minimal sketch of the pruning step follows this list).
This repository contains the Romanian version of DistilBERT.
A template for use in creating Autodistill Target Model packages.
This is a fork of the distilling-step-by-step repository with the aim of creating a task-specific LLM distillation framework for healthcare.
Summer internship project @ JetBrains Research
Efficient Inference techniques implemented in PyTorch for computer vision.
Distillation examples: trying to make speaker recognition faster through different model compression techniques.
Distillation of GANs with fairness constraints
Prompt engineering for developers
Alternus Vera Project
Learn how a smaller network can match a large ensemble model while accelerating inference; a sketch of the standard distillation loss follows this list.
A PyTorch-based knowledge distillation toolkit for natural language processing
An implementation of the paper "Automated training of location-specific edge models for traffic counting".
An entrance test for a Computer Vision / NLP researcher job
A Series on Optimizing Transformer-Based Models
Optimising training, inference, and throughput of expensive ML models
【NCA】Learning Metric Space with Distillation for Large-Scale Multi-Label Text Classification
DINOv1 implementation in PyTorch
Deep Mutual Learning in PaddlePaddle
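The first entry above describes pruning a language model's embedding layer down to the tokens that actually occur in a target corpus. This is a minimal sketch of that idea in plain PyTorch, not code from the listed repository: it assumes the surviving token ids (`kept_ids` here, an illustrative name) have already been collected by running the original tokenizer over the corpus, and it shows the row-selection and id-remapping steps a matching tokenizer would need to agree with.

```python
import torch
import torch.nn as nn

# Toy stand-ins for a real model's vocabulary; in practice these come
# from the pretrained checkpoint and from tokenizing your target corpus.
old_vocab_size, dim = 10_000, 64
old_embedding = nn.Embedding(old_vocab_size, dim)

# Token ids observed in the target corpus plus special tokens (illustrative).
kept_ids = torch.tensor([0, 1, 5, 42, 999])

# Copy only the kept rows into a smaller embedding matrix.
new_embedding = nn.Embedding(len(kept_ids), dim)
with torch.no_grad():
    new_embedding.weight.copy_(old_embedding.weight[kept_ids])

# Remap old token ids to the new, contiguous id space so the pruned
# embedding and the rebuilt tokenizer vocabulary stay consistent.
old_to_new = {old.item(): new for new, old in enumerate(kept_ids)}
```

The same `old_to_new` mapping would then be applied to the tokenizer's vocabulary file, so that every id the crafted tokenizer emits indexes the correct row of the pruned matrix.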
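Several entries above use knowledge distillation to make a small student network match a larger teacher or ensemble. None of the sketches here come from those repositories; this is the standard soft-target loss from Hinton et al.'s distillation paper, blending a temperature-softened KL term with ordinary hard-label cross-entropy (the `T` and `alpha` hyperparameter names are illustrative).

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend of soft-target KL (teacher knowledge) and hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients after temperature softening, per Hinton et al.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

A higher temperature `T` spreads the teacher's probability mass over more classes, which is where most of the "dark knowledge" the student learns from resides.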