Problem with large files #98

Open
test01203 opened this issue Feb 26, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@test01203

Hi, I tried to feed it a 2 GB movie, but it gets stuck in the middle and shows an error.
I tried several movies and hit the same glitch.
I'd love for you to sort it out.
I am working on Windows 11.

@jhj0517
Owner

jhj0517 commented Feb 26, 2024

Hi, I just tried to reproduce a similar problem, but couldn't.
It's highly likely that the problem comes from a lack of GPU RAM or CPU performance, so I recommend using a smaller model.
The best approach, though, would be to split the video into two or more files and then transcribe each file separately, e.g. as in the sketch below.
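
For reference, a minimal sketch of that split-then-transcribe workaround, assuming ffmpeg is installed and on PATH (the file names and the 30-minute segment length are placeholders, not part of Whisper-WebUI):

```python
# Split a long video into ~30-minute chunks without re-encoding; each chunk
# can then be transcribed separately. Assumes ffmpeg is on PATH.
import subprocess

def split_video(input_path: str, segment_seconds: int = 1800) -> None:
    subprocess.run(
        [
            "ffmpeg", "-i", input_path,
            "-c", "copy",              # stream copy: fast, no quality loss
            "-map", "0",               # keep all audio/video streams
            "-f", "segment",           # use the segment muxer
            "-segment_time", str(segment_seconds),
            "-reset_timestamps", "1",  # each chunk starts at t=0
            "part_%03d.mp4",
        ],
        check=True,
    )

split_video("movie.mp4")
```

Note that stream-copy splitting only cuts at keyframes, so chunk lengths are approximate, which is fine for transcription.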

jhj0517 added the bug label on Feb 26, 2024
@Tom-Neverwinter

Tom-Neverwinter commented Mar 12, 2024

Had this error, then closed and re-opened the app, and it took the file and processed it with no problem?

"Error transcribing file on line CUDA failed with error out of memory"

Full log: https://pastebin.com/wRVCpcep. However, my setup should have ample VRAM and system RAM; see the peak-memory check sketched below.
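
One way to check the "ample VRAM" assumption is to measure the actual peak allocation around a run, using the same torch.cuda memory APIs the FutureWarning in the log below refers to (a minimal sketch; run it in the same venv and substitute the transcription call where marked):

```python
# Minimal sketch: report peak CUDA memory use for a transcription run.
import torch

torch.cuda.reset_peak_memory_stats()

# ... run the transcription here ...

peak_gib = torch.cuda.max_memory_allocated() / 2**30
total_gib = torch.cuda.get_device_properties(0).total_memory / 2**30
print(f"peak CUDA allocation: {peak_gib:.2f} GiB of {total_gib:.2f} GiB")
```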

finally caught it:

venv "C:\Users\Tom_N\Desktop\Whisper-WebUI\\venv\Scripts\Python.exe"
Use Faster Whisper implementation
Device "cuda" is detected
C:\Users\Tom_N\Desktop\Whisper-WebUI\venv\lib\site-packages\gradio\components\dropdown.py:163: UserWarning: The value passed into gr.Dropdown() is not in the list of choices. Please update the list of choices to include: float16 or set allow_custom_value=True.
  warnings.warn(
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
  File "C:\Users\Tom_N\miniconda3\lib\asyncio\events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "C:\Users\Tom_N\miniconda3\lib\asyncio\proactor_events.py", line 162, in _call_connection_lost
    self._sock.shutdown(socket.SHUT_RDWR)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
C:\Users\Tom_N\Desktop\Whisper-WebUI\venv\lib\site-packages\torch\cuda\memory.py:330: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
  warnings.warn(
Error transcribing file on line 'NoneType' object is not iterable
Traceback (most recent call last):
  File "C:\Users\Tom_N\Desktop\Whisper-WebUI\venv\lib\site-packages\gradio\queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
  File "C:\Users\Tom_N\Desktop\Whisper-WebUI\venv\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\Users\Tom_N\Desktop\Whisper-WebUI\venv\lib\site-packages\gradio\blocks.py", line 1561, in process_api
    result = await self.call_function(
  File "C:\Users\Tom_N\Desktop\Whisper-WebUI\venv\lib\site-packages\gradio\blocks.py", line 1179, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\Tom_N\Desktop\Whisper-WebUI\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\Users\Tom_N\Desktop\Whisper-WebUI\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Tom_N\Desktop\Whisper-WebUI\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "C:\Users\Tom_N\Desktop\Whisper-WebUI\venv\lib\site-packages\gradio\utils.py", line 678, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\Tom_N\Desktop\Whisper-WebUI\modules\faster_whisper_inference.py", line 127, in transcribe_file
    self.remove_input_files([fileobj.name for fileobj in fileobjs])
TypeError: 'NoneType' object is not iterable
Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
  File "C:\Users\Tom_N\miniconda3\lib\asyncio\events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "C:\Users\Tom_N\miniconda3\lib\asyncio\proactor_events.py", line 162, in _call_connection_lost
    self._sock.shutdown(socket.SHUT_RDWR)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
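
The TypeError at the bottom looks like a secondary failure: transcribe_file reaches self.remove_input_files(...) with fileobjs set to None, presumably after the upload or transcription has already failed, and iterating None raises. A None guard along these lines would keep the cleanup from masking the real error (a minimal sketch; the class and method below are stand-ins, not the actual modules/faster_whisper_inference.py code):

```python
# Minimal sketch of a None-safe cleanup, mirroring the failing call in the
# traceback; FasterWhisperInference and cleanup_inputs are stand-ins for the
# real module, whose internals may differ.
from typing import List, Optional

class FasterWhisperInference:
    def remove_input_files(self, file_paths: List[str]) -> None:
        ...  # delete temporary upload files; stubbed here

    def cleanup_inputs(self, fileobjs: Optional[list]) -> None:
        # fileobjs is None when the run failed before files were registered;
        # iterating None raises the TypeError seen in the log above.
        if not fileobjs:
            return
        self.remove_input_files([fileobj.name for fileobj in fileobjs])
```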
