
belle-whisper model takes much more time even after being converted with CTranslate2 #583

Open
yurinapoleon opened this issue Apr 26, 2024 · 1 comment

Comments

@yurinapoleon

I converted Belle-whisper-large-v2 with CTranslate2, and the resulting model is almost the same size as faster-whisper-large-v2. But when the word_timestamps parameter is True, the Belle model takes much more time (at least 3x, sometimes 10x) than the faster-whisper model. Is this normal?

I converted the model with the following command:
ct2-transformers-converter --model .\Belle-whisper-large-v2-zh\ --output_dir faster-belle-whisper-large-v2-zh --copy_files preprocessor_config.json --quantization float16
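
For reference, here is a minimal sketch of how the converted model is loaded and timed (assuming the faster-whisper Python package; audio.wav is a hypothetical test clip):

```python
# Minimal timing sketch, assuming faster-whisper is installed and
# "audio.wav" (hypothetical path) is a local test clip.
import time

from faster_whisper import WhisperModel

# Load the CTranslate2-converted Belle model from the output_dir above.
model = WhisperModel("faster-belle-whisper-large-v2-zh",
                     device="cuda", compute_type="float16")

start = time.perf_counter()
segments, info = model.transcribe("audio.wav", word_timestamps=True)
segments = list(segments)  # transcribe() is lazy; iterating runs the decoding
print(f"elapsed: {time.perf_counter() - start:.1f}s, segments: {len(segments)}")
```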

@shuaijiang
Collaborator

That confuses me, since Belle-whisper has exactly the same model architecture as Whisper.
BTW, check the output lengths of belle-whisper and faster-whisper; maybe a difference in output length explains the speed gap, as in the sketch below.
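
A quick way to compare output lengths on the same clip might look like this (a sketch assuming faster-whisper; "large-v2" pulls the stock converted model, and audio.wav is a hypothetical path):

```python
# Rough comparison of output length between the stock and Belle models,
# assuming faster-whisper; "audio.wav" is a hypothetical test clip.
from faster_whisper import WhisperModel


def output_length(model_dir: str, audio: str) -> tuple[int, int]:
    """Return (segment count, total transcribed characters) for one model."""
    model = WhisperModel(model_dir, device="cuda", compute_type="float16")
    segments, _info = model.transcribe(audio, word_timestamps=True)
    segments = list(segments)
    return len(segments), sum(len(s.text) for s in segments)


# "large-v2" downloads the stock faster-whisper model; the second path is the
# converted Belle model from the command above.
for name in ("large-v2", "faster-belle-whisper-large-v2-zh"):
    n_seg, n_chars = output_length(name, "audio.wav")
    print(f"{name}: {n_seg} segments, {n_chars} characters")
```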
