
Error transcribing file on line parallel_for failed: cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device #112

Open
liaodong opened this issue Mar 10, 2024 · 4 comments
Labels
bug Something isn't working

Comments

@liaodong

load models models/Whisper/faster-whisper
Error transcribing file on line parallel_for failed: cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device
/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/torch/cuda/memory.py:303: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
warnings.warn(
Traceback (most recent call last):
  File "/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/gradio/queueing.py", line 501, in call_prediction
    output = await route_utils.call_process_api(
  File "/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/gradio/route_utils.py", line 253, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/gradio/blocks.py", line 1704, in process_api
    data = await anyio.to_thread.run_sync(
  File "/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/gradio/blocks.py", line 1460, in postprocess_data
    self.validate_outputs(fn_index, predictions)  # type: ignore
  File "/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/gradio/blocks.py", line 1434, in validate_outputs
    raise ValueError(
ValueError: An event handler (transcribe_file) didn't receive enough output values (needed: 2, received: 1).

@jhj0517 jhj0517 added the bug Something isn't working label Mar 10, 2024
jhj0517 (Owner) commented Mar 10, 2024

Hi, this looks like a CUDA error. Can you run

nvcc --version

in the terminal and share the output? If you're using a CUDA version below 12.0, it isn't compatible with faster-whisper.
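If it helps to do that check programmatically, here is a small sketch (the helper name is made up for illustration) that pulls the release number out of `nvcc --version` output and compares it against the CUDA 12 requirement:

```python
import re

def parse_nvcc_release(output: str):
    """Extract the CUDA release as a (major, minor) tuple from the
    text printed by `nvcc --version`. Hypothetical helper; returns
    None if no release line is found."""
    m = re.search(r"release (\d+)\.(\d+)", output)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

sample = "Cuda compilation tools, release 12.2, V12.2.128"
print(parse_nvcc_release(sample))             # (12, 2)
print(parse_nvcc_release(sample) >= (12, 0))  # True -> meets the CUDA 12 requirement
```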

liaodong (Author):

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Jul_11_02:20:44_PDT_2023
Cuda compilation tools, release 12.2, V12.2.128
Build cuda_12.2.r12.2/compiler.33053471_0

It looks like the versions match.

jhj0517 (Owner) commented Mar 11, 2024

According to here,
your GPU architecture is incompatible with CUDA 12.
Can you try reinstalling torch

# After activating the venv!
pip install --force-reinstall torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/

as that post suggests?

You have to activate the venv before reinstalling; you can activate it with

  • Windows
    venv\Scripts\activate.bat or venv\Scripts\Activate.ps1
  • Linux or MacOS
    source venv/bin/activate
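For background: cudaErrorNoKernelImageForDevice generally means the installed binaries contain no precompiled kernel (cubin) matching the GPU's compute capability, and no PTX the driver could JIT-compile for it. A rough sketch of that matching rule, with a hypothetical helper name (PyTorch exposes the build's list via torch.cuda.get_arch_list(), with entries like "sm_86" or "compute_80"):

```python
def arch_supported(capability, arch_list):
    """Rough sketch: does a build's compiled arch list cover this GPU?

    capability: (major, minor) compute capability, e.g. (8, 6)
    arch_list:  entries like "sm_86" (precompiled cubin) or
                "compute_80" (PTX that the driver can JIT forward)
    Hypothetical helper for illustration only.
    """
    cc = capability[0] * 10 + capability[1]
    for arch in arch_list:
        kind, _, num = arch.partition("_")
        num = int(num)
        if kind == "sm" and num == cc:
            return True   # exact binary for this GPU is present
        if kind == "compute" and num <= cc:
            return True   # older PTX can be JIT-compiled for a newer GPU
    return False

# A wheel built only for newer architectures fails on an older card:
print(arch_supported((8, 6), ["sm_50", "sm_86", "compute_80"]))  # True
print(arch_supported((3, 5), ["sm_70", "sm_86", "compute_70"]))  # False -> cudaErrorNoKernelImageForDevice
```

This is why switching PyTorch versions alone may not help: if the GPU's compute capability is below everything the wheel (or CTranslate2, which faster-whisper uses) was built for, no CUDA 12 build will run on it.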

liaodong (Author):

I tried 1.12.1, and no matter which version I try, it's the same error. Could there be another reason?
