
exl2 #4

Open · eramax opened this issue Dec 29, 2023 · 2 comments

eramax commented Dec 29, 2023

Using exl2 2.4 you can run Mixtral on Colab. Did you give it a try?

dvmazur (Owner) commented Dec 30, 2023

Hey! We are currently looking into other quantization approaches, both to improve inference speed and LM quality. How good is exl2's 2.4 quantization? 2.4 bits per parameter sounds like it would increase perplexity quite a bit. Could you provide any links so we can look into it?

eramax (Author) commented Dec 30, 2023

@dvmazur I made this example for you: https://gist.github.com/eramax/b6fc0b472372037648df7f0019ab0e78
One note: a Colab T4 with 15 GB of VRAM is not enough for the context of Mixtral-8x7B; with 16 GB it would work fine, since we need some VRAM for the context besides the model, and the 2.4 bpw model loads in about 14.7 GB.
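For reference, loading an exl2-quantized checkpoint follows exllamav2's standard example. The snippet below is a minimal sketch, not the gist itself; the model directory, reduced context length, prompt, and sampling settings are placeholder assumptions you would tune to fit the ~14.7 GB weights plus cache inside a 15 GB T4.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Placeholder path to a local 2.4 bpw exl2 quantization of Mixtral-8x7B
config = ExLlamaV2Config()
config.model_dir = "/content/Mixtral-8x7B-exl2-2.4bpw"
config.prepare()
# Capping the context reduces KV-cache VRAM; illustrative value for a 15 GB T4
config.max_seq_len = 4096

model = ExLlamaV2(config)
# Lazy cache + autosplit loading lets the loader fill available GPU memory as it goes
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("Mixtral on a single T4:", settings, num_tokens=64))
```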
