Is your feature request related to a problem? Please describe.
No.
Describe the solution you'd like
Please support the internlm/internlm-xcomposer2-vl-7b-4bit model. A 4-bit quantized checkpoint is already available, and there is sample code for running it with AutoGPTQ: https://github.com/InternLM/InternLM-XComposer/tree/main#4-bit-model
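For reference, the linked README loads the 4-bit model by defining a small `BaseGPTQForCausalLM` subclass that spells out where XComposer2's quantized layers live. A sketch adapted from that example follows (the exact module names should be double-checked against the current model revision):

```python
import torch
import auto_gptq
from transformers import AutoTokenizer
from auto_gptq.modeling import BaseGPTQForCausalLM

# Make sure model_type "internlm" passes AutoGPTQ's support check.
auto_gptq.modeling._base.SUPPORTED_MODELS = ["internlm"]
torch.set_grad_enabled(False)

class InternLMXComposer2QForCausalLM(BaseGPTQForCausalLM):
    layers_block_name = "model.layers"
    # The vision tower, projection, embeddings, and head are not quantized.
    outside_layer_modules = ["vit", "vision_proj", "model.tok_embeddings", "model.norm", "output"]
    # Quantized linear modules inside each transformer block.
    inside_layer_modules = [
        ["attention.wqkv.linear"],
        ["attention.wo.linear"],
        ["feed_forward.w1.linear", "feed_forward.w3.linear"],
        ["feed_forward.w2.linear"],
    ]

model = InternLMXComposer2QForCausalLM.from_quantized(
    "internlm/internlm-xcomposer2-vl-7b-4bit", trust_remote_code=True, device="cuda:0"
).eval()
tokenizer = AutoTokenizer.from_pretrained(
    "internlm/internlm-xcomposer2-vl-7b-4bit", trust_remote_code=True
)
```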
Describe alternatives you've considered
None.
Additional context
Add any other context or screenshots about the feature request here.
This is a bit of a problem: the `internlm` model_type is already supported in AutoGPTQ, but XComposer2 uses a completely different layer layout from the standard InternLM model. A custom model_type name would work, but that would require changing the model's config.
It might be possible to bypass this with a slightly hacky solution.
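One possible shape for that hack, sketched here under the assumption that AutoGPTQ still dispatches `from_quantized` through the `GPTQ_CAUSAL_LM_MODEL_MAP` dict in `auto_gptq.modeling.auto`: remap the `internlm` entry to an XComposer2-aware class (such as the subclass in the example above) before loading. Note that this clobbers plain-InternLM support in the same process, which is what makes it hacky:

```python
from auto_gptq import AutoGPTQForCausalLM
from auto_gptq.modeling.auto import GPTQ_CAUSAL_LM_MODEL_MAP

# Hacky bypass (assumption: GPTQ_CAUSAL_LM_MODEL_MAP is AutoGPTQ's internal
# model_type -> class dispatch table). InternLMXComposer2QForCausalLM is the
# custom subclass from the example above.
GPTQ_CAUSAL_LM_MODEL_MAP["internlm"] = InternLMXComposer2QForCausalLM

# The generic entry point now resolves XComposer2 checkpoints, but plain
# InternLM models loaded afterwards in the same process would break.
model = AutoGPTQForCausalLM.from_quantized(
    "internlm/internlm-xcomposer2-vl-7b-4bit", trust_remote_code=True, device="cuda:0"
)
```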