Error when running yolo with device=0: RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED #12712

@tjasmin111 opened this issue May 15, 2024 · 1 comment
Labels: question (Further information is requested)

Question

When I run YOLOv8 with yolo detect predict ... device=0, I get this error:

RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=, num_gpus=

But the PyTorch output below shows that CUDA is available and usable, so GPU access itself looks fine.

What is the problem, and how can I fix it?

Some PyTorch outputs:

Python 3.9.7 | packaged by conda-forge | (default, Sep  2 2021, 17:58:34) 
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.zeros(2).cuda(0)
tensor([0., 0.], device='cuda:0')
>>> print(torch.__version__)
2.3.0+cu118
>>> print(f"Is CUDA available?: {torch.cuda.is_available()}")
Is CUDA available?: True
>>> print(f"Number of CUDA devices: {torch.cuda.device_count()}")
Number of CUDA devices: 3
>>> device = torch.device('cuda')
>>> print(f"A torch tensor: {torch.rand(5).to(device)}")
A torch tensor: tensor([0.6085, 0.7618, 0.6855, 0.5276, 0.1606], device='cuda:0')

Full stack trace:

Traceback (most recent call last):
  File "/home/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 306, in _lazy_init
    queued_call()
  File "/home/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 174, in _check_capability
    capability = get_device_capability(d)
  File "/home/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 430, in get_device_capability
    prop = get_device_properties(device)
  File "/home/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 448, in get_device_properties
    return _get_device_properties(device)  # type: ignore[name-defined]
RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=, num_gpus=

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/conda/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/home/conda/lib/python3.9/site-packages/ultralytics/cfg/__init__.py", line 583, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "/home/conda/lib/python3.9/site-packages/ultralytics/engine/model.py", line 528, in val
    validator(model=self.model)
  File "/home/conda/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/conda/lib/python3.9/site-packages/ultralytics/engine/validator.py", line 126, in __call__
    device=select_device(self.args.device, self.args.batch),
  File "/home/conda/lib/python3.9/site-packages/ultralytics/utils/torch_utils.py", line 156, in select_device
    p = torch.cuda.get_device_properties(i)
  File "/home/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 444, in get_device_properties
    _lazy_init()  # will define _get_device_properties
  File "/home/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 312, in _lazy_init
    raise DeferredCudaCallError(msg) from e
torch.cuda.DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=, num_gpus=

CUDA call was originally invoked at:

  File "/home/conda/bin/yolo", line 5, in <module>
    from ultralytics.cfg import entrypoint
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 972, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/home/conda/lib/python3.9/site-packages/ultralytics/__init__.py", line 5, in <module>
    from ultralytics.data.explorer.explorer import Explorer
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 972, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 972, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/home/conda/lib/python3.9/site-packages/ultralytics/data/__init__.py", line 3, in <module>
    from .base import BaseDataset
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/home/conda/lib/python3.9/site-packages/ultralytics/data/base.py", line 15, in <module>
    from torch.utils.data import Dataset
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 972, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 972, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/home/conda/lib/python3.9/site-packages/torch/__init__.py", line 1478, in <module>
    _C._initExtension(manager_path())
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/home/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 238, in <module>
    _lazy_call(_check_capability)
  File "/home/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 235, in _lazy_call
    _queued_calls.append((callable, traceback.format_stack()))


@glenn-jocher (Member) commented:

Hello! Thanks for reaching out with the comprehensive details. It seems like a PyTorch-specific issue concerning GPU initialization when running the YOLOv8 model.

As a quick workaround, you might want to try explicitly setting the CUDA-visible devices before launching your script to ensure it’s detecting the right GPU index. Here's how you can set it via the command line:

export CUDA_VISIBLE_DEVICES=0
yolo detect predict model=yolov8n.pt source='your_image_or_video.jpg' device=0

This environment variable tells CUDA to expose only the specified GPU to PyTorch. Adjust the CUDA_VISIBLE_DEVICES index for your environment (0 for the first GPU, 1 for the second, etc.). Note that after masking, the visible GPUs are re-indexed from 0, so device=0 in the yolo command always refers to the first GPU that CUDA_VISIBLE_DEVICES exposes.
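
If you launch YOLO from Python rather than the CLI, a minimal sketch of the same workaround looks like this (the key point is that the variable has to be set before PyTorch initializes CUDA):

import os

# Mask all GPUs except index 0. Set this before any CUDA work happens,
# because PyTorch reads CUDA_VISIBLE_DEVICES once, at CUDA initialization.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch

print(torch.cuda.device_count())      # should now report 1
print(torch.cuda.get_device_name(0))  # the remaining GPU is re-indexed as cuda:0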

If the error persists, make sure your packages are up to date, especially torch, and check that your NVIDIA driver supports the CUDA version your torch build targets (cu118 here), since version mismatches can cause exactly this kind of failure. A reinstall or update may help:

pip install torch torchvision --upgrade
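
To narrow this down outside of YOLO, you can also reproduce the exact call that fails in your stack trace (Ultralytics' select_device loops over torch.cuda.get_device_properties(i) for each device). A minimal diagnostic sketch:

import torch

print(torch.__version__)          # e.g. 2.3.0+cu118
print(torch.version.cuda)         # CUDA version this torch build was compiled for
print(torch.cuda.device_count())  # how many GPUs the runtime reports

# Query each device individually, mirroring select_device in your trace.
# If one specific index raises, that GPU (or its driver state) is suspect.
for i in range(torch.cuda.device_count()):
    try:
        print(i, torch.cuda.get_device_properties(i))
    except Exception as e:  # catch broadly; the deferred CUDA init error may not subclass RuntimeError
        print(f"device {i} failed: {e}")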

Please let us know if this helps or if you need further assistance! 😊
