Can't run SDXL Turbo at all. My 32GB of RAM at 100% and fails. #2079

Open
MikeStirner opened this issue Jan 27, 2024 · 2 comments

Comments

@MikeStirner

What is going wrong:
When I run SDXL Turbo, it never completes.
RAM usage climbs to within 200 MB of full (it may even hit full), which is odd, and that's when it fails. I normally only use about 6 GB of my 32 GB of RAM.
I originally installed SHARK earlier today before realizing my drivers were too old, so I updated them and ran with the --clear_all flag.
I noticed this:
ERROR: Exception in ASGI application
But the other errors are unclear to me.

What I tried already:
I have run with the --clear_all flag and get the same results.
Tried removing the Hugging Face files so they had to redownload; that didn't help.
Also tried running in a command prompt.
Also tried running as admin.

OS: Windows 10

GPU: RX 6800

VRAM: 16 GB

GPU driver: 23.30.13.03-231122a-397541C-AMD-Software-PRO-Edition

RAM: 32 GB

Log attached: Shark error10.txt (2024-01-27T08_42_11_063)

@MikeStirner (Author)

My SHARK filename is nodai_shark_studio_20240109_1118.exe, so I guess that's the version.

@MikeStirner (Author)

Tried some flags (from the "Target" field of the shortcut I made): C:\Installs\SHARK\nodai_shark_studio_20240109_1118.exe --vulkan_large_heap_block_size=0 --use_base_vae
No difference. So then, I used the dropdown in the app to change the default VAE to None.

That time it got farther than before but gave me a MemoryError:

(earlier log lines left out because they looked exactly the same as before)
Looking into gs://shark_tank/SDXL/mlir/unet_1_77_512_512_fp16_sdxl-turbo.mlir
torch\fx\node.py:272: UserWarning: Trying to prepend a node to itself. This behavior has no effect on the graph.
warnings.warn("Trying to prepend a node to itself. This behavior has no effect on the graph.")
saving unet_1_77_512_512_fp16_sdxl-turbo_vulkan_torch_linalg.mlir to .\shark_tmp
No vmfb found. Compiling and saving to C:\Installs\SHARK\unet_1_77_512_512_fp16_sdxl-turbo_vulkan.vmfb
Configuring for device:vulkan://00000000-0300-0000-0000-000000000000
Using target triple -iree-vulkan-target-triple=rdna2-unknown-windows from command line args
Exception in thread Thread-50 (_readerthread):
Traceback (most recent call last):
File "threading.py", line 1038, in _bootstrap_inner
File "threading.py", line 975, in run
File "subprocess.py", line 1552, in _readerthread
MemoryError
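
For context on where that MemoryError comes from: Thread-50 (_readerthread) in the traceback is the helper thread Python's subprocess module starts on Windows to drain a child process's piped output, and it reads the whole stream into memory at once. Below is a minimal sketch of that pattern, assuming the vmfb compile step shells out to the IREE compiler with captured output; the exact command shown is illustrative, not SHARK's actual invocation.

import subprocess

# capture_output=True pipes stdout/stderr; on Windows, Popen.communicate()
# starts one _readerthread per pipe, and each thread does
# buffer.append(fh.read()), holding the child's entire output in RAM
# until the process exits.
proc = subprocess.run(
    ["iree-compile", "model.mlir", "-o", "model.vmfb"],  # illustrative command only
    capture_output=True,
    text=True,
)
# With system RAM already near 100%, the fh.read() inside _readerthread is what
# raises MemoryError, matching the "subprocess.py ... _readerthread" frame above.
print(proc.returncode)

So the error is surfacing in the log-capture thread, but the underlying problem is that the machine runs out of memory during the vulkan .vmfb compile step.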
