PyTorch no longer supports GPU because it is too old #57

Open
fj-de-torres opened this issue Jul 13, 2023 · 1 comment

Comments

@fj-de-torres

After following this tutorial to install pytorch: https://www.linode.com/docs/guides/pytorch-installation-ubuntu-2004/

print(torch.cuda.is_available)

gives:

<function is_available at 0x7f1517429700>

as an answer. I suppose I can understand it as "true".

But when running semantra, it gives me errors; I can't make it work, nor do I know what to do with these messages:

/home/francisco/.local/pipx/venvs/semantra/lib/python3.8/site-packages/torch/cuda/__init__.py:152: UserWarning: 
Found GPU0 NVIDIA GeForce GT 740M which is of cuda capability 3.5.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability supported by this library is 3.7.

warnings.warn(old_gpu_warn % (d, name, major, minor, min_arch // 10, min_arch % 10))
powershell.pdf:   0%|                                     | 0/1 [00:02<?, ?it/s]
Traceback (most recent call last):
  File "/home/francisco/.local/bin/semantra", line 8, in <module>
    sys.exit(main())
  File "/home/francisco/.local/pipx/venvs/semantra/lib/python3.8/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/francisco/.local/pipx/venvs/semantra/lib/python3.8/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/francisco/.local/pipx/venvs/semantra/lib/python3.8/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/francisco/.local/pipx/venvs/semantra/lib/python3.8/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/francisco/.local/pipx/venvs/semantra/lib/python3.8/site-packages/semantra/semantra.py", line 619, in main
    documents[fn] = process(
  File "/home/francisco/.local/pipx/venvs/semantra/lib/python3.8/site-packages/semantra/semantra.py", line 304, in process
    flush_pool()
  File "/home/francisco/.local/pipx/venvs/semantra/lib/python3.8/site-packages/semantra/semantra.py", line 272, in flush_pool
    embedding_results = model.embed(tokens, pool)
  File "/home/francisco/.local/pipx/venvs/semantra/lib/python3.8/site-packages/semantra/models.py", line 309, in embed
    model_output = self.model(
  File "/home/francisco/.local/pipx/venvs/semantra/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/francisco/.local/pipx/venvs/semantra/lib/python3.8/site-packages/transformers/models/xlm_roberta/modeling_xlm_roberta.py", line 827, in forward
    extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape)
  File "/home/francisco/.local/pipx/venvs/semantra/lib/python3.8/site-packages/transformers/modeling_utils.py", line 911, in get_extended_attention_mask
    extended_attention_mask = extended_attention_mask.to(dtype=dtype)  # fp16 compatibility
RuntimeError: CUDA error: no kernel image is available for execution on the device
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
@freedmand
Owner

Can you run print(torch.cuda.is_available()) instead (you didn't have the () in your call)?
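To illustrate the point above, here is a minimal plain-Python sketch (no torch required) of why the output looked the way it did: referencing a function without () prints the function object itself, while calling it returns its value. The is_available below is a stand-in for torch.cuda.is_available, not the real function.

```python
# Referencing a function without () prints the function object's repr,
# which is why print(torch.cuda.is_available) showed
# "<function is_available at 0x7f1517429700>" instead of True/False.
# is_available here is a stand-in for torch.cuda.is_available.

def is_available():
    return True

print(is_available)    # e.g. <function is_available at 0x...>
print(is_available())  # True
```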

You may need to install updated kernel drivers for your GPU; how to do that depends on your GPU and OS. You could also try cloning the repo locally, downgrading the torch version, and installing with pip install -e . to see if that works.
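As a hedged sketch of the version check behind the warning: torch rejects the GT 740M because its CUDA compute capability (3.5) is below the build's minimum (3.7). The tuple values below come from the warning text; the comparison is plain Python for illustration, not torch's actual internals.

```python
# Sketch of the capability check behind the warning (plain Python,
# not torch internals). Both values are taken from the warning text.
MIN_CAPABILITY = (3, 7)   # minimum cuda capability this torch build supports
GPU_CAPABILITY = (3, 5)   # NVIDIA GeForce GT 740M

# Python compares tuples element-wise, so (3, 5) < (3, 7) is True,
# and a safe fallback is to run on the CPU instead.
device = "cuda" if GPU_CAPABILITY >= MIN_CAPABILITY else "cpu"
print(device)  # cpu: this GPU is too old for the installed torch build
```

In real torch, the per-device tuple comes from torch.cuda.get_device_capability(); an older torch release built for capability 3.5 is the other workaround the comment above suggests.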
