
Change onnxruntime requirement to gpu version and update VAD to run on gpu #499

Open · wants to merge 1 commit into master
Conversation

thomasmol

See discussions here: pyannote/pyannote-audio#1481, #493, #364 (comment)

This pull request lets the VAD run on GPU by depending on onnxruntime-gpu rather than onnxruntime. Depending on both packages is problematic: onnxruntime defaults to the CPU version when both are installed. This is mostly an issue when running faster-whisper alongside pyannote.audio (or other libraries that specifically need onnxruntime-gpu to run on GPU).
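
For context, a quick way to check which build ends up being used in a given environment is to list the available execution providers (a minimal sketch; get_available_providers is part of the standard onnxruntime API):

import onnxruntime

# With the CPU-only onnxruntime package this typically prints just
# ['CPUExecutionProvider', ...]; with onnxruntime-gpu installed (and CUDA
# available), 'CUDAExecutionProvider' should appear as well.
print(onnxruntime.get_available_providers())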

@celliso1

Does onnxruntime-gpu fall back to CPU if there is no GPU? Not everyone is using CUDA; some are running on CPU only.

@thomasmol
Author

Unfortunately it does not, so I don't think this pull request will get accepted. I'll leave it open for now in case anyone runs into the same issue that led me to create this pull request.

@celliso1 commented Oct 25, 2023

Per https://onnxruntime.ai/docs/execution-providers/, you can set multiple Execution Providers. I'm not savvy enough today to try this myself, but would it fix the problem?

import onnxruntime as rt

# Define the priority order for the execution providers:
# prefer the CUDA execution provider over the CPU execution provider.
EP_list = ['CUDAExecutionProvider', 'CPUExecutionProvider']

# Initialize a session for model.onnx with that provider priority.
sess = rt.InferenceSession("model.onnx", providers=EP_list)
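
As a follow-up sketch, the session can also be asked which providers it actually selected; if the CUDA provider cannot be initialized, ONNX Runtime typically logs a warning and falls back to CPU rather than failing outright:

# Reports the providers the session is actually using; if only
# 'CPUExecutionProvider' is listed, CUDA was not picked up.
print(sess.get_providers())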

@Purfview
Contributor

FYI, you shouldn't run it on CUDA, as the model isn't meant to run on a GPU.

Benchmark on ~2h of audio with an RTX 4090:

CUDA: 72.22 seconds
CPU: 15.15 seconds
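
For anyone who wants to reproduce a comparison like this, here is a rough timing sketch against a generic single-input ONNX model (the model path, dummy input, and run count are placeholders; the actual VAD model takes additional state inputs, so this only illustrates the approach):

import time
import numpy as np
import onnxruntime as rt

def time_provider(provider, model_path="model.onnx", runs=100):
    # Restrict the session to one execution provider so the timing is attributable to it.
    sess = rt.InferenceSession(model_path, providers=[provider])
    inp = sess.get_inputs()[0]
    # Replace symbolic/unknown dimensions with 1 to build a dummy float32 input.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    x = np.random.randn(*shape).astype(np.float32)
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {inp.name: x})
    return time.perf_counter() - start

for ep in ("CUDAExecutionProvider", "CPUExecutionProvider"):
    print(ep, round(time_provider(ep), 2), "seconds")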
