Inference time #29
Comments
The inference time and GPU memory usage you report significantly exceed expectations. Try terminating unrelated processes and running it again.
That's odd. On a Google Colab A100 with motion-06, it peaks at 12.2 GB: `100% 20/20 [01:21<00:00, 4.05s/it]`
How did you get that? On an RTX 4090 I see much higher VRAM usage than that.
You need a lot of system RAM too. I just tried the T4 on the Colab free tier, but system RAM maxed out at 12 GB while loading the motion module; maybe that could be moved to VRAM instead if you have enough of it. Here is my config file using motion-06:

```yaml
num_inference_steps: 20
guidance_types:
noise_scheduler_kwargs:
unet_additional_kwargs:
guidance_encoder_kwargs:
enable_xformers_memory_efficient_attention: true
```
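For context on why `enable_xformers_memory_efficient_attention` matters for VRAM: naive attention materializes a full `seq_len × seq_len` score matrix per head, while memory-efficient attention processes queries in chunks. A rough back-of-the-envelope sketch — the sequence length, head count, and chunk size below are illustrative assumptions, not values from this repo:

```python
def naive_attn_score_bytes(seq_len: int, heads: int, dtype_bytes: int = 2) -> int:
    """Bytes for the full attention score matrix of one layer (fp16 by default)."""
    return heads * seq_len * seq_len * dtype_bytes

def chunked_attn_score_bytes(seq_len: int, heads: int, chunk: int,
                             dtype_bytes: int = 2) -> int:
    """Bytes held at once when queries are processed `chunk` rows at a time."""
    return heads * chunk * seq_len * dtype_bytes

# Illustrative numbers: a 64x64 latent grid = 4096 tokens, 8 heads, fp16.
naive = naive_attn_score_bytes(4096, 8)           # 268435456 bytes = 256 MiB
chunked = chunked_attn_score_bytes(4096, 8, 256)  # 16777216 bytes = 16 MiB
print(naive // 2**20, chunked // 2**20)  # -> 256 16
```

The per-layer savings multiply across every attention layer and every frame being denoised, which is why toggling this flag can shift peak memory by tens of gigabytes.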
Hi, I'm grateful for your excellent work! I ran the code as per the instructions, and it completes without errors. However, inference is slow, approximately 176 seconds per iteration. I tested on an 80 GB A100 GPU, and it appears to use around 71 GB of GPU memory. Is this normal?
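When comparing numbers like 4.05 s/it against 176 s/it, it helps to time the denoising loop directly rather than relying on the tqdm readout alone. A minimal, framework-agnostic sketch — `step` here is a placeholder for one denoising iteration, not a function from this repo:

```python
import time

def time_per_iteration(step, n_iters: int = 20) -> float:
    """Average wall-clock seconds per call to `step` over n_iters runs."""
    start = time.perf_counter()
    for _ in range(n_iters):
        step()
    return (time.perf_counter() - start) / n_iters

# Placeholder step: substitute a single call into the real denoising loop.
avg = time_per_iteration(lambda: time.sleep(0.01), n_iters=5)
print(f"{avg:.3f} s/it")
```

For the memory side, peak usage on an NVIDIA GPU can be read after a run with `torch.cuda.max_memory_allocated()`, which is more reliable than eyeballing `nvidia-smi` while other processes share the card.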