
server execution error: could not allocate tensor with 52428800 bytes. There is not enough GPU video memory available. Graphics card: ASUS RX 6600 (8 GB VRAM). Why is this happening? Any solution? #655

Closed
ishanjaiswal2610 opened this issue Apr 24, 2024 · 7 comments


@ishanjaiswal2610

[Screenshot attached: Screenshot 2024-04-24 180717]

@Acly
Owner

Acly commented Apr 26, 2024

When I switch to DirectML, a 640x512 image uses almost 12 GB VRAM with the latest version. I also think this was better at some point and fit into 8 GB...

I'm not sure what I can do about it, though; it doesn't look like anybody cares enough about AMD on Windows to improve the situation in ComfyUI :\

@ishanjaiswal2610
Author

> When I switch to DirectML, a 640x512 image uses almost 12 GB VRAM with the latest version. I also think this was better at some point and fit into 8 GB...
>
> I'm not sure what I can do about it, though; it doesn't look like anybody cares enough about AMD on Windows to improve the situation in ComfyUI :\

Can I use my friend's GPU to run Krita? He has an RTX 3060, but he doesn't live with me. Can I use his GPU? If yes, please help me with how I can use his GPU for my work. @Acly

@Sil3ntKn1ght

> When I switch to DirectML, a 640x512 image uses almost 12 GB VRAM with the latest version. I also think this was better at some point and fit into 8 GB...
>
> I'm not sure what I can do about it, though; it doesn't look like anybody cares enough about AMD on Windows to improve the situation in ComfyUI :\

Would this work for him, or anything like it?
Also, can we get a toggle in the configuration to turn these on/off, for those who get confused? I imagine it would need a note that a restart is required.

To work around this, we can edit the settings.json found in
C:\Users\PC\AppData\Roaming\krita\ai_diffusion

Open settings.json with Notepad and add "--force-fp16" or "--force-fp32" to "server_arguments". I'd recommend testing each to see which is faster for you. Please comment below with your card and how these work for you.

It should look something like this (note: I'm testing --normalvram; feel free to test and give feedback in the comments):

```json
{
  "server_mode": "managed",
  "server_path": "C:/Users/PC/AppData/Roaming/krita/pykrita/ai_diffusion/.server",
  "server_url": "127.0.0.1:8188",
  "server_backend": "cuda",
  "server_arguments": "--force-fp16 --normalvram",
  "selection_grow": 5,
  "selection_feather": 5,
  "selection_padding": 7,
  "new_seed_after_apply": false,
  "prompt_line_count": 2,
  "show_negative_prompt": true,
  "auto_preview": true,
  "show_control_end": false,
  "history_size": 1500,
  "history_storage": 100,
  "performance_preset": "low",
  "batch_size": 2,
  "resolution_multiplier": 1.0,
  "max_pixel_count": 2,
  "debug_dump_workflow": false
}
```

@Acly
Owner

Acly commented Apr 29, 2024

> Add "--force-fp16" or "--force-fp32" to "server_arguments"

I don't think those do anything with DirectML. It always uses FP32 and doesn't support anything else.

@Acly
Owner

Acly commented Apr 29, 2024

> Can I use my friend's GPU to run Krita? He has an RTX 3060, but he doesn't live with me. Can I use his GPU?

Yes, you can, but a bit of networking knowledge is required; I can't give you a complete walkthrough here. Maybe you can find guides on the internet.

Steps are roughly:

  1. Set up the server on your friend's PC: either install Krita + the plugin there and go through the installer, copy the server folder from your PC (that might work), or install ComfyUI and its dependencies manually.
  2. Run the server from the command line with the --listen argument.
  3. If you want to connect to your friend's PC over the internet, you either need port forwarding (a security risk, not recommended), a reverse proxy like nginx, or a VPN.
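Once the remote server is running with --listen, the plugin only needs its address in the "server_url" setting. A small sketch to verify the remote machine is reachable before pointing the plugin at it (it assumes ComfyUI's default port 8188 and its /system_stats endpoint; the 192.168.1.50 address is a made-up LAN example):

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

def server_url(host: str, port: int = 8188) -> str:
    """Build the host:port string the plugin expects in "server_url"."""
    return f"{host}:{port}"

def is_reachable(host: str, port: int = 8188, timeout: float = 5.0) -> bool:
    """Return True if a ComfyUI server answers /system_stats at host:port."""
    try:
        with urlopen(f"http://{server_url(host, port)}/system_stats",
                     timeout=timeout) as resp:
            return resp.status == 200 and "system" in json.load(resp)
    except (URLError, OSError, ValueError):
        return False

# Example: check the friend's (or lounge) PC on the local network
# if is_reachable("192.168.1.50"):
#     print("server is up; set server_url to", server_url("192.168.1.50"))
```

Over the internet you would substitute the VPN or reverse-proxy address for the LAN address.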

@Sil3ntKn1ght

> Yes, you can, but a bit of networking knowledge is required; I can't give you a complete walkthrough here. [...]

OMG, I need a video or guide so I can use this on my local network and drive my lounge PC from my desk PC.

@Acly
Owner

Acly commented May 14, 2024

It might also be possible to use AMD on Windows with ZLUDA. It should give much better performance and memory efficiency. There is a ComfyUI fork. But you will have to set it all up yourself, and I can't say whether everything is supported. It's not possible for me to test it without actually having AMD hardware.

@Acly Acly closed this as not planned May 14, 2024