This repository has been archived by the owner on May 1, 2023. It is now read-only.

Releases: IntelLabs/distiller

Intermediate release (pre-release)
01 Apr 15:00

Tagging the 'master' branch before performing a few API-breaking changes.

v0.3.0 (pre-release)
28 Feb 12:11
  • Supports PyTorch 1.0.1
  • Supports installation as a Python package
  • Many new features since release v0.2.0 (PyTorch 0.4); full list TBD

PyTorch 0.4 support and new features (pre-release)
25 Jun 12:12
  • PyTorch 0.4 support
  • An implementation of Baidu's RNN pruning paper from ICLR 2017:
    Narang, Sharan; Diamos, Gregory; Sengupta, Shubho; Elsen, Erich (2017).
    Exploring Sparsity in Recurrent Neural Networks. https://arxiv.org/abs/1704.05119
  • A word language model pruning example using AGP and Baidu's RNN pruning (see the scheduler sketch after this list)
  • Quantization aware training (4-bit quantization)
  • New models: pre-activation ResNet for ImageNet and CIFAR, and AlexNet with batch-norm
  • New documentation content on quantization
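
Distiller's pruners, including the AGP and RNN pruners listed above, are typically driven by a YAML compression schedule that is attached to the training loop through a CompressionScheduler. The sketch below shows the general shape of that wiring; it is a minimal sketch, not the repository's example code. The schedule file name `agp_pruning.yaml` is hypothetical, the model/data/loss objects are placeholders, and the callback names follow Distiller's documented scheduler API of this era.

```python
import torch
import torch.nn as nn
import distiller

# Placeholder model, optimizer, loss, and data -- stand-ins for a real setup.
model = nn.Linear(784, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
train_loader = [(torch.randn(32, 784), torch.randint(0, 10, (32,)))
                for _ in range(8)]

# Build a CompressionScheduler from a YAML schedule ('agp_pruning.yaml' is a
# hypothetical file that would define, e.g., an AutomatedGradualPruner policy).
scheduler = distiller.file_config(model, optimizer, 'agp_pruning.yaml')

steps_per_epoch = len(train_loader)
for epoch in range(3):
    scheduler.on_epoch_begin(epoch)
    for step, (inputs, targets) in enumerate(train_loader):
        # Lets the active pruning policies apply their masks before the forward pass.
        scheduler.on_minibatch_begin(epoch, step, steps_per_epoch)
        loss = criterion(model(inputs), targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.on_minibatch_end(epoch, step, steps_per_epoch)
    scheduler.on_epoch_end(epoch)
```

The repository's samples, such as the word language model pruning example mentioned above, follow this same pattern, with the YAML schedule selecting which pruner runs and on which weight tensors.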

Initial version (pre-release)
16 May 09:21

We're tagging this version, which uses PyTorch 0.3, because we want to move the 'master' branch to support PyTorch 0.4 and its API changes.