
batch-normalization-fuse

Here is 1 public repository matching this topic...

micronet, a model compression and deployment library. Compression: 1. quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference"), low-bit (≤2b) / ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); 2. pruning: normal, reg…

  • Updated Oct 6, 2021
  • Python
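Batch-normalization fusion, the technique this topic covers, folds a BatchNorm layer's affine transform into the weights and bias of the preceding conv/linear layer, so inference needs only one fused operation. A minimal sketch of the per-channel arithmetic, using a hypothetical `fuse_bn` helper (not micronet's API) with per-channel scalar weights for illustration:

```python
import math

def fuse_bn(weight, bias, gamma, beta, mean, var, eps=1e-5):
    """Fold per-channel BatchNorm params into the preceding layer.

    BN computes y = gamma * (z - mean) / sqrt(var + eps) + beta,
    where z = w*x + b, so the fused layer uses
    w' = w * gamma / sqrt(var + eps) and
    b' = (b - mean) * gamma / sqrt(var + eps) + beta.
    """
    fused_w, fused_b = [], []
    for w, b, g, bt, m, v in zip(weight, bias, gamma, beta, mean, var):
        scale = g / math.sqrt(v + eps)  # per-channel BN scale factor
        fused_w.append(w * scale)
        fused_b.append((b - m) * scale + bt)
    return fused_w, fused_b
```

In a real network the same scale multiplies every element of a channel's conv kernel; the per-channel scalars above keep the algebra visible. QAT and PTQ pipelines typically fuse BN this way before quantizing, so the quantizer sees the weights the deployed model will actually use.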
