I've been using `ai` with `mlc-llm` (the core library powering `web-llm`) by running it as a server and implementing a provider (very similar to the Mistral one), but I want to run the LLM directly in the browser so there is less infra and it can still leverage the client's GPU.

Has anyone tried to use `ai` with `web-llm`?
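For context, `web-llm` runs models entirely in the browser via WebGPU and exposes an OpenAI-style chat completions API, which is what such a provider would wrap. A minimal sketch, assuming a recent `@mlc-ai/web-llm` release and one of its prebuilt model IDs:

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// First run downloads the weights into the browser cache and compiles
// the WebGPU kernels; later loads are served from cache.
const engine = await CreateMLCEngine("Llama-3-8B-Instruct-q4f32_1-MLC");

// OpenAI-compatible streaming chat completion, running on the client GPU.
const stream = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Hello from the browser!" }],
  stream: true,
});

for await (const chunk of stream) {
  console.log(chunk.choices[0]?.delta?.content ?? "");
}
```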
AI SDK Core can also be used on the client side. That said, you are right: useChat etc. require a server connection. What you could do is use e.g. AI SDK Core's streamObject or streamText on the client side and then operate directly on those results (without useChat et al.).
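For example, a minimal client-side sketch (AI SDK 4-style API; `createOpenAI` is only a placeholder here, and in the `web-llm` scenario a custom provider wrapping the in-browser engine would take its place):

```ts
import { streamText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

// Placeholder provider: a custom web-llm-backed provider would be used
// instead to keep inference fully in the browser.
const openai = createOpenAI({ apiKey: 'YOUR_KEY' });

const result = streamText({
  model: openai('gpt-4o-mini'),
  prompt: 'Why run inference in the browser?',
});

// Operate directly on the stream, without useChat or a server route.
for await (const textPart of result.textStream) {
  document.getElementById('output')?.append(textPart);
}
```

Since useChat assumes a transport to a server route, consuming `textStream` directly is the escape hatch for fully client-side apps.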
Is this SDK going to have better support for client-side LLMs? Clients have increasingly powerful AI accelerators; future apps will use both client- and server-side LLMs.
Got it working, but it's too much effort to make it work with all the generative UI features at the moment, since they assume a lot of server-side infrastructure.

Depending on how this issue progresses, I will implement my own lib for client-side generative UI.
Feature Description
Compatibility with https://github.com/mlc-ai/web-llm
Use Case
Running the LLM in the browser, with no need for a server