Error transcribing file on line 'NoneType' object is not iterable #128

Open
ARaOnn opened this issue Apr 2, 2024 · 1 comment
Labels: bug

Comments

ARaOnn commented Apr 2, 2024

Which OS are you using?

  • OS: Windows 10

Short videos, such as songs, work well.
However, longer (larger) videos cause errors.
As you suggested in #97, I tried downgrading the version of faster-whisper and also removing the venv and reinstalling with install.bat, but the same error occurs.

I have 16 GB of VRAM; the output of nvcc --version is below:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Feb_27_16:28:36_Pacific_Standard_Time_2024
Cuda compilation tools, release 12.4, V12.4.99
Build cuda_12.4.r12.4/compiler.33961263_0

Error :

Error transcribing file on line 'NoneType' object is not iterable
D:\project\Whisper-WebUI\venv\lib\site-packages\torch\cuda\memory.py:330: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
warnings.warn(
Traceback (most recent call last):
File "D:\project\Whisper-WebUI\venv\lib\site-packages\gradio\queueing.py", line 495, in call_prediction
output = await route_utils.call_process_api(
File "D:\project\Whisper-WebUI\venv\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
output = await app.get_blocks().process_api(
File "D:\project\Whisper-WebUI\venv\lib\site-packages\gradio\blocks.py", line 1561, in process_api
result = await self.call_function(
File "D:\project\Whisper-WebUI\venv\lib\site-packages\gradio\blocks.py", line 1179, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\project\Whisper-WebUI\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "D:\project\Whisper-WebUI\venv\lib\site-packages\anyio_backends_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
File "D:\project\Whisper-WebUI\venv\lib\site-packages\anyio_backends_asyncio.py", line 851, in run
result = context.run(func, *args)
File "D:\project\Whisper-WebUI\venv\lib\site-packages\gradio\utils.py", line 678, in wrapper
response = f(*args, **kwargs)
File "D:\project\Whisper-WebUI\modules\faster_whisper_inference.py", line 128, in transcribe_file
self.remove_input_files([fileobj.name for fileobj in fileobjs])
TypeError: 'NoneType' object is not iterable
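
For reference, the failing call is a list comprehension over fileobjs, so this TypeError is exactly what happens when the handler receives None instead of a list of uploaded files. A minimal sketch of that failure mode is below; collect_names is a hypothetical stand-in for the real handler, and only the failing expression is taken from the traceback.

# Minimal sketch of the failure mode shown in the traceback above.
# collect_names is hypothetical; the list comprehension mirrors the
# failing line in modules/faster_whisper_inference.py.
def collect_names(fileobjs):
    return [fileobj.name for fileobj in fileobjs]

try:
    collect_names(None)  # what the handler effectively receives when no files arrive
except TypeError as err:
    print(err)  # 'NoneType' object is not iterable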

ARaOnn added the bug label on Apr 2, 2024
jhj0517 (Owner) commented Apr 29, 2024

Hi, sorry for the late response.
This issue seems to be the same as #77 if it only happens when you upload a huge file.

I guess this is related to how Gradio caches files, but it is hard to reproduce because it seems to depend directly on PC performance.
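
If that is the cause, i.e. Gradio handing over None for the file input when caching a very large upload fails, one possible workaround is to guard the handler against a missing file list and surface a readable message instead of a raw traceback. This is only a sketch under that assumption, not a fix taken from the repository; cleanup_inputs is a hypothetical helper.

import gradio as gr

def cleanup_inputs(remove_input_files, fileobjs):
    # Hypothetical guard: skip cleanup and report a readable error when Gradio
    # passes None (or an empty list) for the uploaded files.
    if not fileobjs:
        raise gr.Error("No input files were received; please try re-uploading the file.")
    remove_input_files([fileobj.name for fileobj in fileobjs])

The same "if not fileobjs" check could equally be placed at the top of transcribe_file so the transcription step is skipped entirely when no files arrive.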
