
Update Requirements.txt #25

Open
Soumadip-Saha opened this issue Feb 18, 2024 · 0 comments

Update the requirements.txt so the project runs on V100 GPUs in Colab. OpenAI has released triton 2.2.0, which is not compatible with V100 GPUs. I hit this issue in my notebook, and after investigating I had to add an upper version bound on torch. It should be:

torch>=2.1.0,<2.2.0
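As a sketch of what this pin enforces, the check below parses a torch version string and tests it against the proposed range (the helper name and sample version strings are illustrative, not part of the project):

```python
# Illustrative check: does an installed torch version satisfy
# the proposed pin torch>=2.1.0,<2.2.0?
def satisfies_pin(version: str) -> bool:
    # Drop local version labels like "+cu118", then compare the
    # first three numeric components as a tuple.
    parts = tuple(int(p) for p in version.split("+")[0].split(".")[:3])
    return (2, 1, 0) <= parts < (2, 2, 0)

print(satisfies_pin("2.1.2+cu118"))  # within the pin
print(satisfies_pin("2.2.0"))        # would pull in triton 2.2.0
```

With this bound in requirements.txt, pip resolves a torch build whose bundled triton still supports V100 (compute capability 7.0) kernels.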

You can find the issue here.
The error was this:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
[<ipython-input-12-e4c6296ba548>](https://localhost:8080/#) in <cell line: 10>()
     12   start_time = time.time()
     13   with torch.autocast(model.device.type, dtype=torch.float16, enabled=True):
---> 14     output = model.generate(**model_inputs, max_length=500)[0]
     15   duration += float(time.time() - start_time)
     16   total_length += len(output)

25 frames
[/usr/local/lib/python3.10/dist-packages/triton/compiler/compiler.py](https://localhost:8080/#) in ttgir_to_llir(mod, extern_libs, target, tma_infos)
    165     # TODO: separate tritongpu_to_llvmir for different backends
    166     if _is_cuda(target):
--> 167         return translate_triton_gpu_to_llvmir(mod, target.capability, tma_infos, runtime.TARGET.NVVM)
    168     else:
    169         return translate_triton_gpu_to_llvmir(mod, 0, TMAInfos(), runtime.TARGET.ROCDL)

IndexError: map::at
