This repository has been archived by the owner on May 1, 2023. It is now read-only.

Higher than 8-bit Quantization not working properly!? #554

Open
Amin-Azar opened this issue Mar 11, 2021 · 0 comments

Comments

@Amin-Azar

Thanks for this great framework!
I was wondering whether there is an explicit "no", or a known limitation, on quantizing weights and/or activations to more than 8 bits using asymmetric methods. When I try 16 or 32 bits for weights with asymetric_s (and similarly for activations), accuracy drops to 0.2%, whereas it should improve.
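To illustrate why the reporter expects accuracy to improve: in plain asymmetric (affine) quantization, reconstruction error is bounded by half the quantization step, and the step shrinks as bit width grows, so 16-bit should be strictly more faithful than 8-bit. The sketch below is a minimal, framework-independent implementation of asymmetric quantization (it does not reproduce the framework's `asymetric_s` mode, whose exact semantics are not shown in this issue); the function and variable names are illustrative, not from the framework.

```python
import numpy as np

def asym_quantize(x, num_bits):
    # Asymmetric (affine) quantization: map [x.min(), x.max()]
    # onto the unsigned integer grid [0, 2^num_bits - 1].
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = np.round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Reconstruct the real-valued tensor from integers.
    return scale * (q - zero_point)

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)  # stand-in for a weight tensor

for bits in (8, 16):
    q, s, zp = asym_quantize(w, bits)
    max_err = np.abs(dequantize(q, s, zp) - w).max()
    print(f"{bits}-bit max reconstruction error: {max_err:.6f}")
```

With this reference implementation, the 16-bit error is orders of magnitude below the 8-bit error, so a large accuracy drop at 16/32 bits points at an implementation detail rather than the math itself (one plausible culprit, purely speculative: intermediate values or zero-points stored in a fixed 8-bit integer type that overflows at wider widths).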

@Amin-Azar Amin-Azar changed the title Higher than 8-bit Quantization Higher than 8-bit Quantization not working properly!? Mar 11, 2021
