
Model is still re-loading even though --highvram is turned on #3423

Open
vu0tran opened this issue May 7, 2024 · 0 comments
vu0tran commented May 7, 2024

I'm running ComfyUI as a server. I'm using the exact same workflow every time, and the only thing that changes is the model. Even though I'm using --highvram, the model is still partially reloaded when I swap checkpoints.

This adds about 2-4 seconds of overhead when switching between models.

Is there a way to keep my models 100% in VRAM so there's no overhead when switching between them?
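For context, the behavior I'm hoping for is essentially a server-side cache that keeps every checkpoint it has already loaded resident, so switching models is a lookup rather than a reload. A minimal sketch of that idea (names here are hypothetical, not ComfyUI's actual API; `loader` stands in for whatever expensive checkpoint-loading call happens internally):

```python
# Hypothetical sketch: cache loaded checkpoints by name so the
# expensive load happens at most once per model, and switching
# between already-loaded models is just a dict lookup.
_loaded_models = {}

def get_model(name, loader):
    """Return a cached model, invoking the (expensive) loader only on first request."""
    if name not in _loaded_models:
        _loaded_models[name] = loader(name)  # expensive load, done once
    return _loaded_models[name]
```

With a cache like this, the second and later requests for the same checkpoint would skip loading entirely, which is the kind of zero-overhead switching I'd expect --highvram to give me.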

Here's an example generation without switching the Model:

got prompt
100%|█████████████████████████████████████████████████████████| 5/5 [00:02<00:00,  2.42it/s]
Prompt executed in 2.79 seconds

When I change the model, I expect it to swap quickly from VRAM, but instead I get:


model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
loaded straight to GPU
Requested to load BaseModel
Loading 1 new model
lora key not loaded: lora_te_text_model_encoder_layers_0_mlp_fc1.alpha
...
Requested to load SD1ClipModel
Loading 1 new model
Requested to load SD1ClipModel
Loading 1 new model
Requested to load BaseModel
Loading 1 new model
100%|█████████████████████████████████████████████████████████| 5/5 [00:02<00:00,  2.44it/s]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 4.69 seconds