Replies: 1 comment
-
We are unable to merge LoRA weights into a GPTQ-quantized model directly, because re-quantizing the merged weights requires a calibration dataset. casper-hansen/AutoAWQ#85 (comment)
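To illustrate why this fails, here is a minimal sketch of what "merging" means on an ordinary fp16/fp32 weight: the low-rank update is folded element-wise into the dense matrix as W' = W + (alpha/r)·B·A. The shapes and values below are hypothetical, chosen only for illustration; a GPTQ layer stores packed integer weights plus scales/zero-points, so this element-wise addition has no direct equivalent there.

```python
import numpy as np

# Hypothetical sizes for illustration: a rank-2 LoRA update on a 4x4 weight.
d, r, alpha = 4, 2, 4.0
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d)).astype(np.float32)  # dense base weight (fp16/fp32 in practice)
A = rng.standard_normal((r, d)).astype(np.float32)  # LoRA down-projection
B = rng.standard_normal((d, r)).astype(np.float32)  # LoRA up-projection

# Merging folds the low-rank update into the dense weight in place:
W_merged = W + (alpha / r) * (B @ A)

# A GPTQ layer has no dense W to add into -- only packed ints + scales/zeros --
# so you would have to dequantize, merge, then re-quantize, and the
# re-quantization step is what needs a calibration dataset.
```

The practical workaround is therefore to merge the adapter into the unquantized base model first (e.g. PEFT's `merge_and_unload`) and quantize the result afterwards.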
-
Hello, I noticed that merging LoRA weights into a quantized model is not supported. How would one go about adding this support? I can help if it's not too difficult to do. Thanks!