[BUG] This module is not using GPU at all. #632
The same problem here. I've installed onnxruntime-gpu and rembg[gpu] successfully, but it doesn't use the GPU.
I think this is an issue with how onnxruntime-gpu installs. Try installing your usual dependencies and then afterwards run
I have actually tried all possible ways.

1. Pip-installing rembg automatically disables support for the GPU. For instance, if I
So I pip installed the base rembg[gpu] without any of the other dependencies, and then manually installed all the dependencies without onnxruntime:
Doing this shows that
So clearly, a normal pip install of rembg[gpu] removes 'CUDAExecutionProvider' to begin with and falls back to CPU support only. Same result if I
2. Tried using torch 2.1 and CUDA 11.
But nope, still no GPU usage.
3. Hardcoded 'CUDAExecutionProvider' into rembg_session.inner_session._providers.
Output:
The EP Error also makes no sense, because I already installed the TensorRT libraries as mentioned in the GPU requirements page, yet it falls back to ['CUDAExecutionProvider', 'CPUExecutionProvider']. But somehow, during runtime, the provider falls back to 'CPUExecutionProvider' alone once again, and there is no GPU usage. I inspected all the code files for how the provider is selected, but I still could not figure out this issue.
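For what it's worth, the silent fallback described above matches onnxruntime's general behavior: requested providers that cannot actually be loaded (e.g. because CUDA/cuDNN libraries are missing or mismatched) are dropped, and the session falls back to the CPU provider. A minimal illustrative sketch of that selection logic (not onnxruntime's actual code, just the pattern):

```python
def pick_providers(requested, available):
    """Illustrative sketch of onnxruntime-style provider fallback:
    keep only the requested providers that are actually available,
    otherwise fall back to the CPU provider."""
    chosen = [p for p in requested if p in available]
    return chosen or ["CPUExecutionProvider"]

# If the CUDA libraries failed to load, only the CPU provider is
# available, so a request for CUDA silently degrades to CPU:
print(pick_providers(["CUDAExecutionProvider"], ["CPUExecutionProvider"]))
# → ['CPUExecutionProvider']
```

This is why `ort.get_available_providers()` is the first thing to check: if 'CUDAExecutionProvider' is not in that list, no amount of passing providers to `new_session` will help.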
Same issue.
Same issue here, so only the CPU works on my Jetson.
Here is my solution:
# based on cuda 11.8, other versions may have compatibility issues
# for cudnn version check https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements
# and https://docs.nvidia.com/deeplearning/cudnn/archives/cudnn-895/install-guide/index.html
sudo apt-get install libcudnn8=8.9.2.26-1+cuda11.8 -y
# directly installing rembg[gpu] will cause dependency issues
# so install it separately
pip3 install onnxruntime-gpu==1.18.0
pip3 install "rembg[gpu]==2.0.50" --no-deps
pip3 install numpy opencv-python-headless pillow pooch pymatting scikit-image scipy tqdm
# test & enjoy
python3 -c "from rembg import remove, new_session; from PIL import Image; output = remove(Image.open('i.png'), session=new_session('u2net', ['CUDAExecutionProvider'])); output.save('o.png')"
The solution still doesn't work, buddy @Mr47hsy 😭
@KVig122 Try to identify the specific cause by running:

import onnxruntime as ort
print(f"onnxruntime device: {ort.get_device()}")  # expected output: GPU
print(f"ort avail providers: {ort.get_available_providers()}")  # expected: ['CUDAExecutionProvider', 'CPUExecutionProvider']
ort_session = ort.InferenceSession('/root/.u2net/u2net.onnx', providers=["CUDAExecutionProvider"])
print(ort_session.get_providers())
Same error here. Excellent results, but CPU only. |
I also saw no spikes at all on my GPU RAM graph.
I am running this code on Google Colab with a T4 runtime; here is the link to [Video Background Removal], a project I created for my non-tech colleagues: https://colab.research.google.com/drive/16AslpibFerebpJXULY0C8oCPH8Dqrvim?usp=sharing
I have Colab Pro and tried it on other GPUs too, but rembg did not use any GPU at all.