
Is FlexGen+GPTQ 4bit possible? #101

Open

BarfingLemurs opened this issue Mar 19, 2023 · 1 comment

Comments

BarfingLemurs commented Mar 19, 2023

Just a curious question, I suppose!

GPTQ 4-bit: https://github.com/qwopqwop200/GPTQ-for-LLaMa

Suppose someone eventually finetunes the 175B OPT model, with LoRAs or regular finetuning, or perhaps the BLOOM or BLOOMZ model. Would running inference with GPTQ make it possible to fit the model in 4 GB of VRAM and 50 GB of DRAM?

Ying1123 (Collaborator) commented:

  1. FlexGen already supports 4-bit compression (see Sec. 5 of the paper). It is exposed through two flags, one for weight compression and one for cache compression (see the example invocation after this list):

     parser.add_argument("--compress-weight", action="store_true",
     parser.add_argument("--compress-cache", action="store_true",

  2. The compression in FlexGen has computation overhead, so it is not always better to turn it on. For large models like 175B, which involve disk swapping, it is usually better to turn on both weight and cache compression.
  3. GPTQ 4-bit has not been implemented in FlexGen.
  4. Even if you use 4-bit, the weights of a 175B model occupy roughly 90 GB, so 4 GB of VRAM plus 50 GB of DRAM is not sufficient (a back-of-envelope check follows below).
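
For reference, enabling both flags might look like the sketch below. Only `--compress-weight` and `--compress-cache` are quoted above; the `flexgen.flex_opt` entry point, the model name, and the `--percent` split are based on my reading of the FlexGen README and may differ across versions:

```python
# Minimal sketch: launch FlexGen with both compression flags enabled.
# Assumptions: flexgen is installed, and --percent takes six numbers
# (GPU/CPU split for weights, KV cache, and activations, in that order).
import subprocess

subprocess.run([
    "python", "-m", "flexgen.flex_opt",
    "--model", "facebook/opt-30b",
    "--percent", "0", "100", "0", "100", "0", "100",  # everything on CPU
    "--compress-weight",  # 4-bit group-wise weight compression (Sec. 5)
    "--compress-cache",   # 4-bit KV cache compression
], check=True)
```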
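
And a quick back-of-envelope check of point 4, assuming memory is dominated by the weights and ignoring the KV cache, activations, and any quantization metadata:

```python
# Rough weight-memory footprint of OPT-175B at different precisions.
num_params = 175e9

for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    gib = num_params * bits / 8 / 1024**3
    print(f"{name}: ~{gib:.0f} GiB of weights")

# fp16: ~326 GiB, int8: ~163 GiB, int4: ~81 GiB
# Even at 4 bits, the ~81 GiB (~87 GB) of weights alone already
# exceeds 4 GB of VRAM plus 50 GB of DRAM combined.
```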
