
AMD GPUs #63

Open
flying-sheep opened this issue Sep 15, 2022 · 25 comments

Labels
enhancement New feature or request · help wanted Extra attention is needed

Comments

@flying-sheep

Describe the bug

I have an AMD Radeon RX 6800 XT. Stable Diffusion supports this GPU.

After building this image, it fails to run:

 => => naming to docker.io/library/webui-docker-automatic1111                                                                                                                                                0.0s
[+] Running 1/1
 ⠿ Container webui-docker-automatic1111-1  Created                                                                                                                                                           0.2s
Attaching to webui-docker-automatic1111-1
Error response from daemon: could not select device driver "nvidia" with capabilities: [[gpu]]

Steps to Reproduce

  1. Run docker compose --profile auto up --build (after download)

Hardware / Software:

  • OS: Arch Linux (up-to-date)
  • GPU: AMD Radeon RX 6800 XT
  • Version 1.0.1
@flying-sheep flying-sheep added the bug Something isn't working label Sep 15, 2022
@AbdBarho
Owner

@flying-sheep Unfortunately, AMD GPUs are not currently supported.
I know that the auto fork can run on AMD GPUs (https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs), but I don't have one to test with.

If you would like to contribute, that would be great!

@AbdBarho AbdBarho added enhancement New feature or request and removed bug Something isn't working labels Sep 15, 2022
@flying-sheep
Author

flying-sheep commented Sep 15, 2022

This docker-compose file seems to support passing AMD GPUs to docker: https://github.com/compscidr/lolminer-docker/blob/main/docker-compose.yml

But I don’t know what’s necessary software-wise. Making just the device change, I get:

webui-docker-automatic1111-1  | txt2img: 
webui-docker-automatic1111-1  | /opt/conda/lib/python3.8/site-packages/torch/autocast_mode.py:162: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
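
For reference, the device change in question looks roughly like this, sketched from the linked compose file (service name illustrative):

services:
  webui:
    devices:
      - /dev/kfd   # ROCm compute interface
      - /dev/dri   # GPU render nodes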

@flying-sheep
Author

flying-sheep commented Sep 15, 2022

Ah, it seems PyTorch needs to be installed via pip to get ROCm support. But it’s unclear to me whether that means it somehow detects the GPU while building, because if the built PyTorch package can run on both CUDA and ROCm, there’s no reason not to distribute it via Anaconda, right?

@AbdBarho
Owner

You are asking difficult questions my friend.

@flying-sheep
Author

Welp, apparently nvidia has pressed enough people into their monopoly that I’m the first one 😧

@AbdBarho AbdBarho added the help wanted Extra attention is needed label Sep 16, 2022
@JoeMojoJones

Have a look at : https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs#running-inside-docker

You need to pass the GPU through into the Docker container for ROCm to use it.
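
In docker run terms, the passthrough described on that wiki page looks roughly like this (a sketch; the image name is a placeholder):

docker run -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --security-opt seccomp=unconfined \
  some-rocm-image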

@AbdBarho
Owner

AbdBarho commented Oct 1, 2022

@JoeMojoJones thank you, this link is helpful for reference.

The problem is I have no AMD GPU so I can't even test if the code works.

@GBora

GBora commented Nov 4, 2022

@AbdBarho I have PyTorch installed via pip on my machine. What do I need to modify in the Dockerfile to get AMD working? Maybe if it works I can open a PR for this?

@AbdBarho
Owner

AbdBarho commented Nov 5, 2022

@GBora that's great! Unfortunately, I have no experience working with AMD GPUs and Docker for deep learning. Maybe the link above can help guide you.

My guess is that the changes would be related to the base image and the deploy config in docker compose, but this is just a guess.
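
For illustration, a hedged sketch of what that guess might look like, swapping the NVIDIA deploy reservation for direct device mounts (service names are placeholders):

# NVIDIA variant (current):
services:
  auto:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

# AMD variant (sketch):
services:
  auto-amd:
    devices:
      - /dev/kfd
      - /dev/dri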

@NazarYermolenko

> The problem is I have no AMD GPU so I can't even test if the code works.

Please make the changes to the docker-compose file and let me know; I'll pull them, try to run it, and tell you if everything is correct :) At the moment invoke doesn't return the issue in the discussion. I have an RX 6600 and will try to run it.

@mtthw-meyer

I got it working pretty easily for AMD

https://github.com/AbdBarho/stable-diffusion-webui-docker/pull/362/files

@flying-sheep
Author

Awesome, your branch works nicely indeed!

Finally a way to use the GPU’s potential lol.

@svupper
Contributor

svupper commented Mar 30, 2023

Hello, I have this error although I have a Tesla T4 and Ubuntu 22.04. Can somebody help me, pls? I thought using Docker might make my life easier :'c

@svupper
Contributor

svupper commented Mar 30, 2023

Ok :) I just needed to execute this:

# Add the NVIDIA container runtime GPG key and apt repository
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey |
sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list |
sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
# Install the toolkit and restart Docker so it picks up the nvidia runtime
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
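
To verify the runtime is wired up, something like this should print the GPU (the CUDA image tag is just an example; any CUDA base image works):

docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi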


@f1am3d

f1am3d commented Aug 28, 2023

@flying-sheep Was it merged to master?

@flying-sheep
Author

No, doesn’t look like it: #362

I just checked it out locally and ran it.

@tgm4883

tgm4883 commented Sep 16, 2023

@mtthw-meyer Does your fork still work? I'm trying to get it running, but it complains "Found no NVIDIA driver on your system". This is usually bypassed by passing "--skip-torch-cuda-test" to launch.py, but I don't see where launch.py gets used.

Never mind, I got it working. I had to update some things in the Dockerfile for torch, install some additional packages, and edit the requirements file to get auto working. Still trying to sort out InvokeAI.

@Coniface

@tgm4883 could you please open a PR or share your modifications to fix the container?

@tgm4883

tgm4883 commented Sep 18, 2023

@Coniface

I'll try to share that when I get home tonight. It's some fixes on the AMD fork, and I know so little about SD that it might have other issues, but it runs and works with the plugins I use.

@tgm4883

tgm4883 commented Sep 19, 2023

I'm attaching the git diff I made. I also have a build script that builds and tags the image. I've only gotten the automatic1111 interface to work. Let me know if you have any questions.

# Tag each build with a timestamp so images are uniquely identifiable
TIMESTAMP=$(date +%Y%m%d.%H%M%S)
export BUILD_DATE=$TIMESTAMP
# Remove any previous test container and "latest" image, ignoring errors
docker rm -f test-sd-auto-1 &>/dev/null || :
docker image rm -f sd:auto-amd-latest &>/dev/null || :
# Build the AMD service and re-point the "latest" tag at the new image
docker compose build auto-amd
docker tag sd:auto-amd-$BUILD_DATE sd:auto-amd-latest

Updated the file I uploaded to clean it up a little bit: 20230918.txt

@justin13888

As of writing, I found that the sd-webui documentation is out of date for AMD GPUs on Linux (I'm currently using Fedora 39 and want to run it on an AMD 6900 XT): https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs

It also skips a lot of details on the necessary prerequisites for setting up ROCm/HIP-related dependencies. I think the easiest way is to use the rocm/pytorch Docker image after all; even the ROCm documentation suggests it as one of the first options for setup. One sticking point is that there are a lot of factors affecting whether PyTorch gets installed correctly to detect and use your AMD GPU. I'm currently working on a Docker image that could deploy the stable-diffusion-webui on AMD GPU systems with one click.

I'd be interested in seeing whether others are working on something similar or have thoughts on this!

@cloudishBenne

> As of writing, I found that the sd-webui documentation is out of date for AMD GPUs on Linux […]

Even though I also think the AMD docs are miserably out of date (and I just can't understand why), you don't need to install any special ROCm/HIP system dependencies. The only thing needed is the special pytorch-rocm Python package:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6
PyTorch - get started locally
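
Once installed, a quick sanity check (torch.version.hip is only set on ROCm builds, so on a working setup this should print a HIP version string and True):

python3 -c "import torch; print(torch.version.hip, torch.cuda.is_available())"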

@tristan-k

Any news on that matter? I'm searching for a way to run webui on a 680m.

@justin13888

justin13888 commented Apr 5, 2024

As an update, I was able to run AUTOMATIC on Fedora 39 using rocm5.7.1 provided through the repo and this version of torch and torchvision:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.7

@justin13888

> Any news on that matter? I'm searching for a way to run webui on a 680m.

I have a laptop with the same chip but never tried it. You have to make sure your architecture is supported by referring to the compatibility matrix (e.g. https://rocm.docs.amd.com/en/docs-5.7.1/release/gpu_os_support.html).

I also found somebody commenting about this in rocm repo: ROCm/ROCm#2932 (comment)
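
For what it's worth, RDNA2 iGPUs like the 680M (gfx1035) are usually absent from the official support matrix; a commonly reported workaround, not officially supported (an assumption here, verify against your setup), is to spoof a supported target:

export HSA_OVERRIDE_GFX_VERSION=10.3.0   # report as gfx1030 (RX 6000 series)
# or, when running in Docker:
# docker run -e HSA_OVERRIDE_GFX_VERSION=10.3.0 ...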
