Server execution error: could not allocate tensor with 52428800 bytes. There is not enough GPU video memory available. Graphics card: Asus RX 6600 8 GB VRAM. Why is this happening? Any solution? #655
Comments
When I switch to DirectML, a 640x512 image uses almost 12 GB of VRAM with the latest version. I also think this was better at some point and fit into 8 GB... I'm not sure what I can do about it, though; it doesn't look like anybody cares enough about AMD on Windows to improve the situation in ComfyUI :\
Can I use my friend's GPU to use Krita? He has an RTX 3060, but he doesn't live with me. Can I use his GPU? If yes, please help me with how I can use his GPU for my work. @Acly
Would this work for him? Or anything like it? To work around this, we can edit the settings.json: open settings.json with Notepad; it should look something like this (note: I'm testing --normalvram, feel free to test and give feedback in the comments) {
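For anyone running ComfyUI directly rather than through the plugin, the equivalent memory-management flags can be passed on the command line. This is a sketch of upstream ComfyUI's own launch flags, not the plugin's settings.json format:

```
# Launched from the ComfyUI folder; each flag trades speed for VRAM headroom.
python main.py --normalvram   # moderate memory management
python main.py --lowvram      # splits the model; slower, but lower peak VRAM
python main.py --novram       # most aggressive; keeps weights in system RAM
```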
I don't think those do anything with DirectML. It always uses FP32 and doesn't support anything else.
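To put rough numbers on that: the failed allocation in the title is exactly 50 MiB, and an FP32 tensor takes twice the bytes of the same tensor in FP16. A back-of-the-envelope sketch (the 4-channel feature-map shape is an illustrative assumption, not ComfyUI's actual layout):

```python
# The allocation that failed in the error message, converted to MiB.
failed_bytes = 52428800
print(failed_bytes / 1024**2)  # 50.0

# Rough cost of one 640x512 4-channel feature map.
w, h, c = 640, 512, 4
fp32_bytes = w * h * c * 4  # 4 bytes per float32 element
fp16_bytes = w * h * c * 2  # 2 bytes per float16 element
print(fp32_bytes // fp16_bytes)  # 2: FP32 doubles the memory footprint
```

So a backend locked to FP32 roughly doubles peak VRAM compared to one that can run FP16, which is consistent with a workload fitting in 8 GB on one backend and overflowing it on DirectML.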
Yes, you can, but a bit of networking knowledge is required; I can't give you a complete walkthrough here. Maybe you can find guides on the internet. The steps are roughly:
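A minimal sketch of one way to set this up, assuming the friend's PC runs the ComfyUI server and you reach it over an SSH tunnel. The hostname, user, and SSH reachability (e.g. via a VPN) are assumptions, not a complete recipe:

```
# On the friend's PC (RTX 3060): start ComfyUI and accept external connections.
python main.py --listen 0.0.0.0 --port 8188

# On your PC: forward local port 8188 to the friend's machine over SSH.
ssh -L 8188:localhost:8188 friend@friends-pc.example

# Then point the plugin at the forwarded server URL:
#   http://127.0.0.1:8188
```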
OMG, I need a video or guide so I can use this on my local network and use my lounge PC from my desk PC.
It might also be possible to use AMD on Windows with ZLUDA. It should give much better performance and memory efficiency. There is a ComfyUI fork, but you will have to set it all up yourself, and I can't say whether everything is supported. It's not possible for me to test it without actually having AMD hardware.