mkl_malloc: failed to allocate memory #797

Open

samuelbraun04 opened this issue Apr 21, 2024 · 2 comments

samuelbraun04 commented Apr 21, 2024

Running

from faster_whisper import WhisperModel  # import assumed, based on the WhisperModel API used here

model = WhisperModel("large-v3", device="cuda", compute_type="float16")

gives me:

mkl_malloc: failed to allocate memory

I have 32 GB of installed DDR5 memory and I'm running this on a 4080 with 16 GB of VRAM. I'm not sure how this error keeps coming up, since I have plenty of RAM. Has anyone else experienced the same thing?
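
Worth noting: despite the CUDA device argument, mkl_malloc is Intel MKL's host-side allocator, so this failure concerns system RAM (or MKL's own allocation behavior), not the 4080's VRAM. A minimal isolation sketch, assuming this is the faster-whisper package and with audio.wav as a placeholder file, is to load on CPU with int8 and see whether the allocation still fails:

# Isolation sketch, not a confirmed fix. If the CPU/int8 load below also
# triggers mkl_malloc failures, the problem is host memory or MKL itself,
# independent of the GPU path.
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cpu", compute_type="int8")
segments, info = model.transcribe("audio.wav")  # "audio.wav" is a placeholder
for segment in segments:
    print(segment.start, segment.end, segment.text)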

@unmotivatedgene

@samuelbraun04 Mine has started doing this as well, occasionally, after previously working fine. Did you find a resolution? My guess is torch versions.

samuelbraun04 commented May 11, 2024

@unmotivatedgene Unfortunately I never figured out a fix, so I assumed it was just something I messed up on my end. But it's coming out of nowhere again and I haven't changed anything, so now I'm just confused. And I'm literally running this on a 4080, so I really don't get how I could be running out of memory.
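
One way to check whether host RAM is actually the bottleneck is to watch process and system memory around the model load. A rough sketch, assuming the third-party psutil package (not mentioned in this thread):

import psutil
from faster_whisper import WhisperModel

def rss_gib():
    # Resident memory of the current process, in GiB.
    return psutil.Process().memory_info().rss / 2**30

print(f"before load: process {rss_gib():.2f} GiB, "
      f"system free {psutil.virtual_memory().available / 2**30:.2f} GiB")
model = WhisperModel("large-v3", device="cuda", compute_type="float16")
print(f"after load: process {rss_gib():.2f} GiB")

If system free memory is still large right before the failure, that points at MKL's allocator rather than genuine RAM exhaustion.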

samuelbraun04 reopened this May 11, 2024