faster-whisper 1.0.2

Released by @trungkienbkhn on 06 May 02:08 · 2f6913e
  • Add support for distil-large-v3 (#755)
    The latest Distil-Whisper model, distil-large-v3, is intrinsically designed to work with OpenAI's sequential long-form transcription algorithm.
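
A minimal usage sketch following the documented faster-whisper API; the audio path and option values are placeholders:

```python
from faster_whisper import WhisperModel

# Load the CTranslate2 conversion of distil-large-v3 from the Hugging Face Hub.
model = WhisperModel("distil-large-v3", device="cuda", compute_type="float16")

# The sequential long-form algorithm works best without conditioning on
# previously decoded text.
segments, info = model.transcribe(
    "audio.mp3",  # placeholder path
    language="en",
    condition_on_previous_text=False,
)

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```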

  • Benchmarks (#773)
    Adds utilities for benchmarking faster-whisper's memory usage, Word Error Rate (WER), and transcription speed.
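
The benchmark utilities themselves ship with the repository; as a rough sketch of the speed and WER measurements only (memory profiling omitted), assuming the third-party jiwer package and a placeholder reference transcript:

```python
import time

from faster_whisper import WhisperModel
from jiwer import wer  # third-party WER metric, used here purely for illustration

model = WhisperModel("large-v3", device="cuda", compute_type="float16")

start = time.perf_counter()
segments, _ = model.transcribe("audio.mp3", language="en")
# transcribe() returns a generator, so decoding actually runs while we join.
hypothesis = " ".join(segment.text.strip() for segment in segments)
elapsed = time.perf_counter() - start

reference = "the ground-truth transcript of audio.mp3"  # placeholder
print(f"time: {elapsed:.1f}s  WER: {wer(reference, hypothesis):.2%}")
```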

  • Support initializing more Whisper model arguments (#807)
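
The notes do not say which arguments #807 exposes, so the sketch below only illustrates constructor-level configuration with parameters that WhisperModel already documents:

```python
from faster_whisper import WhisperModel

model = WhisperModel(
    "large-v3",
    device="cuda",
    device_index=0,            # which GPU to load the model on
    compute_type="float16",    # precision / quantization of the weights
    cpu_threads=4,             # threads used when running on CPU
    num_workers=2,             # workers for concurrent transcriptions
    download_root="./models",  # local cache for converted models
    local_files_only=False,    # set True to avoid hitting the Hugging Face Hub
)
```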

  • Small bug fixes:

    • Fix a crash when the input audio is empty (#768)
    • Disable VAD when clip_timestamps is in use (#769)
    • Make faster_whisper.assets a valid Python package for distribution (#774)
    • Loosen tokenizers version constraint (#804)
    • Update the CUDA version and installation instructions (#785)
  • New features ported from the original OpenAI Whisper project:

    • Add hotwords support (#731)
    • Improve language detection (#732)
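
A sketch of the two new transcribe() options; the hotword string and the language-detection values are placeholders:

```python
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cuda", compute_type="float16")

# Bias decoding toward domain vocabulary (#731).
segments, info = model.transcribe("audio.mp3", hotwords="CTranslate2 faster-whisper")

# Detect the language over several segments instead of only the first one,
# falling back when no candidate clears the probability threshold (#732).
segments, info = model.transcribe(
    "audio.mp3",
    language_detection_segments=4,     # placeholder value
    language_detection_threshold=0.5,  # placeholder value
)
print(info.language, info.language_probability)
```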