Hi, I recently changed my server, and it now has an Intel i5 (10th gen) iGPU and an NVIDIA GTX 1650.

I am installing openai-whisper with the faster_whisper module; this is the docker-compose:

I was hoping to use the `large` model, but I think it is too big: I get the error `CUDA failed with error out of memory`. It seems to load the entire model into the video card's RAM. Maybe what I am asking is impossible; I don't know how the whole system works, and I don't see any mounted volumes, so I ask hoping the question is not too stupid. Can't I download the model locally and use it without loading all of it into GPU memory? Or is there a way to share system RAM with the video card's RAM when needed?
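To see why the `large` model runs out of memory on a 4 GB card like the GTX 1650, a weights-only back-of-the-envelope estimate helps. This is a hedged sketch: the ~1.55 billion parameter count for whisper-large is an assumption (not stated in this thread), and real usage adds activations, the decoder cache, and framework overhead on top of the weights.

```python
# Rough VRAM estimate for Whisper model weights alone.
# Assumption: whisper-large has roughly 1.55 billion parameters.
PARAMS_LARGE = 1.55e9

def weights_gib(n_params: float, bytes_per_param: int) -> float:
    """Memory needed just for the weights, in GiB, at a given precision."""
    return n_params * bytes_per_param / 2**30

fp16_gib = weights_gib(PARAMS_LARGE, 2)  # float16 (the usual GPU default)
int8_gib = weights_gib(PARAMS_LARGE, 1)  # int8-quantized weights

print(f"float16 weights: ~{fp16_gib:.1f} GiB")
print(f"int8 weights:    ~{int8_gib:.1f} GiB")
```

At float16, the weights alone approach 3 GiB, leaving very little of a 4 GB card for activations and the decoding cache, which is consistent with the OOM error. faster-whisper (built on CTranslate2) exposes a `compute_type` argument on `WhisperModel` (for example `compute_type="int8"`), which is the usual way to shrink the footprint on small cards; whether that fits your docker-compose setup depends on how the container passes options through.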