Add streaming support for runs #5
Comments
Hello! We worked around the stream problem by bypassing it. Our solution is probably not the best, but it will do as a temporary one. What we did: we pass a callback to the runner's chat model and receive chunks, which we use to update the message in the Prisma database as they arrive.

We also slightly changed the order in which the message record is added to the database, so the assistant's answer now appears immediately and is then updated in place. On the frontend we buffer the incoming characters and render them gradually: the frontend only refreshes the message once per second, but the user sees smooth typing (as smooth as the current load allows). If our solution is satisfactory, we can prepare a pull request, or it can wait for a more optimal solution to emerge.
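For reference, here's a minimal sketch of that workaround, assuming a Prisma `message` model with `threadId`/`role`/`content` fields and an OpenAI-style streaming chat completion call. The function and field names are illustrative, not this project's actual API:

```ts
import OpenAI from 'openai'
import { PrismaClient } from '@prisma/client'

const openai = new OpenAI()
const prisma = new PrismaClient()

// Hypothetical runner step: create the assistant's message row up front,
// then append streamed chunks to it as they arrive.
async function streamAssistantReply(threadId: string, prompt: string) {
  // Insert an empty assistant message immediately, so clients can show it right away.
  const message = await prisma.message.create({
    data: { threadId, role: 'assistant', content: '' }
  })

  const stream = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: prompt }],
    stream: true
  })

  let content = ''
  for await (const chunk of stream) {
    content += chunk.choices[0]?.delta?.content ?? ''
    // Naive: one DB write per chunk; in practice you'd throttle or batch these updates.
    await prisma.message.update({
      where: { id: message.id },
      data: { content }
    })
  }
  return message.id
}
```

Creating the empty row first is what makes the answer appear immediately; the per-chunk update is what the frontend then polls once per second.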
Would be interesting to see the community come up with what this should look like from a purely end-user UX perspective. To me the end user would be able to pass …
This isn't supported in the official OpenAI API yet, but it was mentioned at the OpenAI dev day that it will be coming soon, possibly via websocket and/or webhook support.
See this related issue in the OpenAI developer community.
The toughest part of this is that the `runner` is completely disparate from the HTTP `server`, as it should be, so it can process thread runs in an async task queue. The `runner` is responsible for making the chat completion calls, which are streamable, so we'd have to either:

- make the `createRun` or `createThreadAndRun` operations streamable, and then pipe the chat completion calls into that stream (sketched below), or
- push the chunks out-of-band from `createRun` / `createThreadAndRun`, e.g. via the websocket/webhook support mentioned above.
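To illustrate the first option: because the `runner` and the HTTP `server` are separate processes, the streamed chunks need some transport between them. Below is a minimal sketch using Redis pub/sub plus server-sent events; the channel naming, the `publishChunk` helper, and the Express-style handler are all assumptions for illustration, not part of this codebase:

```ts
import { Redis } from 'ioredis'
import type { Request, Response } from 'express'

// Runner side: publish each chat completion chunk to a per-run channel,
// so the HTTP server can relay it without being coupled to the task queue.
const pub = new Redis()

export async function publishChunk(runId: string, delta: string) {
  await pub.publish(`run:${runId}:chunks`, delta)
}

// Server side: an Express-style handler that subscribes to the run's channel
// and forwards each chunk to the client as a server-sent event.
const sub = new Redis()

export async function streamRun(req: Request, res: Response) {
  const runId = req.params.runId
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive'
  })

  await sub.subscribe(`run:${runId}:chunks`)
  const onMessage = (channel: string, delta: string) => {
    if (channel === `run:${runId}:chunks`) {
      res.write(`data: ${JSON.stringify({ delta })}\n\n`)
    }
  }
  sub.on('message', onMessage)

  // Clean up when the client disconnects.
  req.on('close', () => {
    sub.off('message', onMessage)
    void sub.unsubscribe(`run:${runId}:chunks`)
  })
}
```

The point of this shape is that the runner stays decoupled from HTTP: it only publishes chunks, and any number of server processes can relay them to waiting clients.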