
[BUG] This module is not using GPU at all. #632

Open
KVignesh122 opened this issue May 16, 2024 · 9 comments
Labels
bug Something isn't working

Comments

@KVignesh122

rembg_session = rembg.new_session()
rembg.remove(data=Image.open(input_path), session=rembg_session, only_mask=True)

rembg_session.inner_session.get_providers() # prints out ['CPUExecutionProvider']

I also saw that there were no spikes at all on my GPU RAM graph.

I am running this code on GColab with T4 runtime, here is the link to a project [Video Background Removal] I have created for my non-tech colleagues: https://colab.research.google.com/drive/16AslpibFerebpJXULY0C8oCPH8Dqrvim?usp=sharing

I have GColab pro and tried it on other GPUs too, but rembg did not use any GPU at all.

@KVignesh122 KVignesh122 added the bug Something isn't working label May 16, 2024
@ProgrammingLife

ProgrammingLife commented May 21, 2024

Same problem here. I've installed onnxruntime-gpu and rembg[gpu] successfully, but it doesn't use the GPU.
Maybe we should run it some other way than just rembg i ...?

@jalsop24
Contributor

I think this is an issue with how onnxruntime-gpu installs.

Try installing your usual dependencies and then afterwards run pip install --force-reinstall onnxruntime-gpu

@KVignesh122
Author

I have actually tried every approach I could find.

1. Pip installing rembg[gpu] automatically disables GPU support
Installing rembg[gpu] installs both onnxruntime-gpu AND onnxruntime. I read somewhere that having both packages installed may cause a conflict that disables GPU support.

For instance, if I pip install rembg[gpu] as suggested and run this code:

import torch
import onnxruntime as ort
print(ort.get_available_providers()) # Available providers are ['AzureExecutionProvider', 'CPUExecutionProvider']

So I pip installed the base rembg[gpu] without any of the other dependencies, and then manually installed all the dependencies without onnxruntime:

pip install onnxruntime-gpu
pip install rembg[gpu] --no-deps
pip install jsonschema numpy opencv-python-headless pillow pooch pymatting scikit-image scipy tqdm

Doing this shows that

import torch
import onnxruntime as ort
print(ort.get_available_providers()) # Available providers are ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider']

So clearly, a normal pip install of rembg[gpu] removes 'CUDAExecutionProvider' from the start and falls back to CPU-only support... I get the same result with pip install --force-reinstall onnxruntime-gpu, @jalsop24 :(
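For anyone else debugging this, one quick way to confirm the CPU/GPU package conflict described above is to list which onnxruntime distributions are actually installed. This is just a sketch using only the standard library; the helper name `find_ort_packages` is mine, not part of rembg or onnxruntime:

```python
from importlib import metadata

def find_ort_packages():
    """List installed distributions whose name starts with 'onnxruntime'."""
    names = set()
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name.startswith("onnxruntime"):
            names.add(name)
    return sorted(names)

# Seeing both 'onnxruntime' and 'onnxruntime-gpu' here means the CPU wheel
# is shadowing the GPU wheel, which matches the provider list shown above.
print(find_ort_packages())
```

If both names show up, uninstalling both and reinstalling only onnxruntime-gpu (as done above with --no-deps) is the workaround being discussed in this thread.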

2. Tried using torch 2.1 and CUDA 11
With that in mind, I also read that PyTorch 2.2.x may have problems with CUDA 12 support, so I forced the program to run on PyTorch 2.1.2 with CUDA 11.8 instead:

pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu118
pip install nvidia-cudnn-cu11
pip install tensorrt-cu11
import torch
print(torch.__version__) # Prints out 2.1.2+cu118

But nope, still no GPU usage.

3. Hardcoded 'CUDAExecutionProvider' into rembg_session.inner_session._providers
Tried this too:

rembg_session = rembg.new_session()
rembg_session.inner_session._providers = ['CUDAExecutionProvider']
print(rembg_session.inner_session.get_providers())

Output:

*************** EP Error ***************
EP Error /onnxruntime_src/onnxruntime/python/onnxruntime_pybind_state.cc:456 void onnxruntime::python::RegisterTensorRTPluginsAsCustomOps(onnxruntime::python::PySessionOptions&, const ProviderOptions&) Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
 when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
['CUDAExecutionProvider']

The EP Error makes no sense either, because I have already installed the "TensorRT libraries as mentioned in the GPU requirements page". It then falls back to ['CUDAExecutionProvider', 'CPUExecutionProvider'], but somehow, at runtime, the provider drops back to 'CPUExecutionProvider' alone once again, with no GPU usage. I inspected all the code files to see how the provider is selected, but I still could not figure this out...
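Since the EP error message points at PATH / LD_LIBRARY_PATH, one sanity check is to see which LD_LIBRARY_PATH entries look like CUDA / cuDNN / TensorRT locations. This is a rough sketch; the marker list and the helper name `gpu_library_dirs` are my own guesses, not anything from onnxruntime:

```python
import os

def gpu_library_dirs(markers=("cuda", "cudnn", "tensorrt")):
    """Return LD_LIBRARY_PATH entries that look like CUDA/cuDNN/TensorRT dirs."""
    raw = os.environ.get("LD_LIBRARY_PATH", "")
    entries = [e for e in raw.split(os.pathsep) if e]
    return [e for e in entries if any(m in e.lower() for m in markers)]

# An empty list here would explain onnxruntime failing to load the TensorRT
# libraries even though they are installed somewhere on disk.
print(gpu_library_dirs())
```

If nothing shows up, exporting LD_LIBRARY_PATH to include the TensorRT/cuDNN library directories before launching Python may be worth a try.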

@Bouts2019

same issue

@Bonitodelcapo

Same issue here.
Checked the installation matrix and installed the provided wheels for Jetson from the Jetson Zoo.
Then I pip installed rembg[gpu], which has onnxruntime as a dependency, so it also installed the CPU package.
I tried to pip uninstall the onnxruntime package and force-reinstall onnxruntime-gpu, but then rembg gets stuck in a loop on import.

So only CPU works on Jetson.

@Mr47hsy

Mr47hsy commented May 29, 2024

Here is my solution:

Based on @KVignesh122's work. Thank you, buddy!

# based on cuda 11.8, other versions may have compatibility issues
# for cudnn version check https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements
# and https://docs.nvidia.com/deeplearning/cudnn/archives/cudnn-895/install-guide/index.html
sudo apt-get install libcudnn8=8.9.2.26-1+cuda11.8 -y
# directly installing rembg[gpu] will cause dependency issues
# so install it separately
pip3 install onnxruntime-gpu==1.18.0
pip3 install rembg[gpu]==2.0.50 --no-deps
pip3 install numpy opencv-python-headless pillow pooch pymatting scikit-image scipy tqdm

# test & enjoy
python3 -c "from rembg import remove, new_session; from PIL import Image; output = remove(Image.open('i.png'), session=new_session('u2net', ['CUDAExecutionProvider'])); output.save('o.png')"

@KVig122

KVig122 commented May 29, 2024

print(new_session('u2net', ['CUDAExecutionProvider']).inner_session.get_providers())
# You still get ['CPUExecutionProvider']

Solution still doesn't work buddy @Mr47hsy 😭

@Mr47hsy

Mr47hsy commented May 29, 2024

@KVig122
Hi, it worked on my side:
[screenshot]

Try to identify the specific cause by:

import onnxruntime as ort

print(f"onnxruntime device: {ort.get_device()}") # output: GPU
print(f'ort avail providers: {ort.get_available_providers()}') # output: ['CUDAExecutionProvider', 'CPUExecutionProvider']

ort_session = ort.InferenceSession('/root/.u2net/u2net.onnx', providers=["CUDAExecutionProvider"])
print(ort_session.get_providers())
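The fallback in the EP error pasted earlier can be reproduced in miniature. This is a hedged sketch of that selection logic, not onnxruntime's actual API (the function `pick_providers` is illustrative only): keep the requested providers that are actually available, in order, and fall back to the CPU provider when none are.

```python
def pick_providers(requested, available):
    """Keep requested execution providers that are available, in order;
    fall back to CPUExecutionProvider when none of them are."""
    chosen = [p for p in requested if p in available]
    return chosen or ["CPUExecutionProvider"]

# Mirrors the fallback in the EP error above: TensorRT is requested but not
# available, so only the CUDA provider survives the filter.
print(pick_providers(
    ["TensorrtExecutionProvider", "CUDAExecutionProvider"],
    ["CUDAExecutionProvider", "CPUExecutionProvider"],
))  # ['CUDAExecutionProvider']
```

This is why checking ort.get_available_providers() first, as suggested above, matters: if 'CUDAExecutionProvider' is missing from the available list, every request silently ends up on the CPU.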

@WindowsNT

Same error here. Excellent results, but CPU only.
