v0.6.1
Smaller patch release with some nice improvements and two new contributors 🙌
Highlights
- Tokenizer no longer requires a HubApi request to succeed if the files are already downloaded
- This was a frequent community request and enables offline transcription, as long as all the files have already been downloaded
- The function is also now public, so you can bundle the tokenizer files with your app alongside the model files
- @smpanaro found a really nice speedup across the board by using IOSurface-backed MLMultiArrays
- Especially noticeable on older devices
- General cleanup, including a nice bug fix from @couche1 when streaming via the CLI
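For context on the IOSurface highlight: CoreML can wrap an IOSurface-backed `CVPixelBuffer` in an `MLMultiArray` without copying, which lets the same memory be shared with the Neural Engine and GPU. The sketch below shows the general technique using standard CoreML/CoreVideo APIs; it is an illustration, not WhisperKit's actual implementation, and the function name is hypothetical.

```swift
import CoreML
import CoreVideo

// Hypothetical helper: allocate an IOSurface-backed MLMultiArray holding
// `count` float16 values. Illustrative sketch only.
func makeIOSurfaceBackedArray(count: Int) -> MLMultiArray? {
    var pixelBuffer: CVPixelBuffer?
    // Passing kCVPixelBufferIOSurfacePropertiesKey asks CoreVideo for an
    // IOSurface-backed buffer, which CoreML can share across compute units
    // without an extra copy.
    let status = CVPixelBufferCreate(
        kCFAllocatorDefault,
        count, // width: one row of `count` float16 values
        1,     // height
        kCVPixelFormatType_OneComponent16Half,
        [kCVPixelBufferIOSurfacePropertiesKey: [:]] as CFDictionary,
        &pixelBuffer
    )
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }
    // MLMultiArray(pixelBuffer:shape:) wraps the buffer in place; the product
    // of the shape dimensions must equal the buffer's width * height.
    return MLMultiArray(pixelBuffer: buffer, shape: [1, NSNumber(value: count)])
}
```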
What's Changed
- Memory and Latency Regression Tests by @Abhinay1997 in #99
- @Abhinay1997 is building out this regression test suite so we can be sure we're always shipping code that matches or improves on previous releases in speed, accuracy, memory use, etc.
- Fix audio file requirement for streaming mode by @couche1 in #121
- Use IOSurface-backed MLMultiArrays for float16 by @smpanaro in #130
- Cleanup by @ZachNagengast in #132
New Contributors
Full Changelog: v0.6.0...v0.6.1