# edge-whisper

https://edge-ai.yomo.run

Real-time transcription powered by the Whisper model running on a geo-distributed cloud.


This showcase demonstrates real-time speech-to-text transcription using the Whisper model. The model is deployed across geographically distributed cloud infrastructure to ensure optimal performance and low latency for users around the world.

Users are automatically directed to the most suitable backend server based on their location. To determine your assigned backend and hardware configuration, ping `edgeai.yomo.dev` and check the returned IP address.
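For example, you can check which regional endpoint DNS assigns you from your location (the output will vary by region):

```bash
# Resolve the geo-routed hostname to see which regional endpoint serves you
ping -c 1 edgeai.yomo.dev

# Or query the DNS record directly
dig +short edgeai.yomo.dev
```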

By leveraging this geographically distributed architecture, this showcase delivers fast, accurate, and reliable speech transcription for users globally.

## Self-hosting

*yomo.run edge AI inference demo*

To deploy this real-time speech transcription system on your own infrastructure, follow these steps:

  1. **Start the frontend:** Run `pnpm run dev` to launch the frontend application, which provides the interface for simultaneous interpretation.
  2. **Choose your backend:** Backends are located in the `./backends/` directory and are built using YoMo. Each backend targets a specific type of AI infrastructure.
  3. **Run the backend:** Select and run the appropriate backend script; see the sketch after this list.
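A minimal sketch of steps 1 and 3, assuming the repository layout described above. The backend script path is a hypothetical placeholder — list `./backends/` and pick the script that matches your AI infrastructure:

```bash
# Terminal 1: install dependencies and launch the frontend dev server
pnpm install
pnpm run dev

# Terminal 2: start one of the YoMo-based backends.
# The path below is a placeholder — inspect ./backends/ for the real scripts.
ls ./backends/
sh ./backends/<your-backend>.sh
```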

Please note: These instructions assume you have the necessary dependencies installed, such as Whisper, Whisper.cpp, and the YoMo framework. Refer to the project documentation for further details.
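As one hedged example of preparing those dependencies, Whisper.cpp can be built from source as follows (these commands mirror the upstream whisper.cpp README; `base.en` is just one common model choice):

```bash
# Build whisper.cpp from source
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
make

# Download a ggml-format Whisper model — base.en is one common choice
bash ./models/download-ggml-model.sh base.en
```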

## Development on Arm dev machine

Follow the instructions to run this demo on a dev machine with an Arm-based processor.