[Feature] Support uint32 in save_gguf #814
Comments
So I think your goal is to export MLX-quantized weights directly to GGUF. It is not quite so simple as enabling a uint32 dtype in save_gguf.
I will leave this open as an enhancement to help prioritize when we can get to it. For the very short term, your best bet is exporting to fp16 (either safetensors or GGUF) and then quantizing.
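A minimal sketch of that workaround, assuming the tensors of interest are already in memory as MLX arrays (the tensor names and the llama.cpp quantize step are illustrative, not part of this issue):

```python
# Hedged sketch of the suggested workaround: write float16 tensors with
# save_gguf, then quantize the resulting file with an external tool.
import mlx.core as mx

weights = {"layer.weight": mx.random.normal((512, 512)).astype(mx.float16)}

# float16 is a dtype that save_gguf accepts, so this succeeds.
mx.save_gguf("model-f16.gguf", weights, {})

# The fp16 GGUF can then be quantized outside of MLX, e.g. with llama.cpp's
# quantize tool (exact binary name and quant type depend on your setup):
#   ./llama-quantize model-f16.gguf model-q4.gguf Q4_K_M
```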
I see. I thought that since we can load a quantized GGUF model in MLX without de-quantizing it, maybe we could convert MLX-quantized weights directly to GGUF, which would be more useful than converting to an f16 GGUF and then quantizing it.
Quick note: I'm not sure uint32 is a supported type in GGUF; I only see signed ints on the list.
While working on converting a quantized MLX model to GGUF format, I noticed that save_gguf does not support uint32. This makes it difficult to convert quantized models, given that their weights are stored as uint32 arrays. Is there any chance uint32 could be supported in save_gguf?
The code to replicate the issue:
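A minimal sketch along these lines (tensor names are illustrative; mx.quantize packs weights into uint32):

```python
# Minimal sketch: quantizing a weight with MLX produces a uint32-packed tensor,
# which save_gguf currently rejects.
import mlx.core as mx

w = mx.random.normal((512, 512))
w_q, scales, biases = mx.quantize(w, group_size=64, bits=4)
print(w_q.dtype)  # uint32

# Fails: uint32 is not a supported dtype in save_gguf.
mx.save_gguf(
    "quantized.gguf",
    {"layer.weight": w_q, "layer.scales": scales, "layer.biases": biases},
    {},
)
```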