torch.cuda.OutOfMemoryError with SDXL #2785
Comments
Try again now.
Thanks for the quick response and for trying to resolve the issue, but unfortunately it still doesn't work correctly. I no longer get an error message, but when I open the WebUI and try to generate an image, it shows "In queue 1/1" for a moment and then loads forever without ever producing an image. I hope this helps. Thanks again for the support!
I am receiving the same CUDA out-of-memory error on a RunPod instance using the RunPod Fast Stable Diffusion template:

OutOfMemoryError: CUDA out of memory. Tried to allocate 9.41 GiB (GPU 0; 19.71 GiB total capacity; 16.71 GiB already allocated; 2.32 GiB free; 17.13 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
The out-of-memory error was resolved by reducing the "resize to" (img2img) image parameters to a smaller image size. None of the suggested methods of changing the max split size were effective, including:
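For reference, the max-split-size change suggested in the error message is typically applied by setting `PYTORCH_CUDA_ALLOC_CONF` before PyTorch initializes its CUDA allocator. A minimal sketch (the 512 MiB value is illustrative, not a recommendation from this thread):

```python
import os

# Must be set before the first CUDA allocation, i.e. before torch
# creates its caching allocator. 512 is an example split size in MiB.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

# import torch  # import torch only after the variable is set
```

As the comment above notes, this did not help in this case; lowering the image resolution did.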
Hello,
I've set up my notebook on Paperspace as per the instructions in TheLastBen/PPS, aiming to run StableDiffusion XL on a P4000 GPU. However, when attempting to generate an image, I encounter a CUDA out of memory error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 10.00 MiB (GPU 0; 7.92 GiB total capacity; 6.79 GiB already allocated; 5.69 MiB free; 7.04 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
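The figures in that message already show why the request fails: nearly all of the card's 7.92 GiB is reserved by PyTorch, leaving only about 5.69 MiB free, so even this small 10 MiB allocation cannot be served. A quick sanity check on the reported numbers (values copied from the error above):

```python
# Figures copied from the error message (units as reported).
total_gib = 7.92       # GPU 0 total capacity
reserved_gib = 7.04    # reserved in total by PyTorch
free_mib = 5.69        # free
requested_mib = 10.00  # "Tried to allocate"

# The request exceeds the remaining free memory, so the caching
# allocator raises torch.cuda.OutOfMemoryError.
allocation_fails = requested_mib > free_mib
print(allocation_fails)
```

In practice this means SDXL's working set simply does not fit on an 8 GiB P4000 at the settings used, independent of allocator tuning.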
I've followed all setup instructions to the letter and haven't deviated from the recommended settings. Despite the detailed error message, I'm unsure how to proceed to resolve this.
Has anyone encountered a similar issue or have suggestions on what I should do?
I previously tried to:
Thank you very much!