
[BUG] Webchat Embed & Chat in the FlowiseAI UI in the Chatflow editor timeout #2380

Closed

qdrddr opened this issue May 9, 2024 · 3 comments

qdrddr commented May 9, 2024

Describe the bug
In the Webchat Embed and in the Chat panel of the FlowiseAI Chatflow editor canvas, a locally running Ollama model may respond with a longer delay than, for example, OpenAI; the chat then times out and doesn't show the response. However, in the Chatflow editor I can close the chat and open it again, and then the response appears. With the Embed chat this workaround doesn't work.

To Reproduce
Create a Chatflow with Ollama, export: qdrant2-W Chatflow.json

Expected behavior
The chat should not time out and should display the response without reloading the page or closing and re-opening the chat.

Screenshots
ScreenShot 2024-05-09 at 12 59 57

Flow
qdrant2-W Chatflow.json

Setup

  • Installation: k8s deployment from this helm chart (chart version 3.3.0), docker image from Docker Hub, tested with flowiseai/flowise:1.6.6 & 1.7.2
  • Flowise Versions tested: 1.6.6 & 1.7.2
  • OS: Ubuntu Linux 22.04
  • Browser: tested in Edge, Chrome, and Safari, latest versions (macOS 14)

Additional context
I believe there is some sort of timeout setting causing this issue that needs to be adjusted. Could it be hardcoded?
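
One way to narrow this down is to call the chatflow's prediction endpoint directly with a generous client-side timeout and see whether the backend eventually answers after the chat widget has given up. A minimal sketch in TypeScript, assuming a default deployment at http://localhost:3000 and a placeholder chatflow ID; the endpoint is Flowise's standard POST /api/v1/prediction/{chatflowId} API:

```typescript
// Call the Flowise prediction API directly with a long client-side timeout,
// to check whether the backend finishes even after the chat widget gives up.
// Base URL and chatflow ID below are placeholders for this deployment.
const FLOWISE_URL = "http://localhost:3000"; // assumption: default port
const CHATFLOW_ID = "<your-chatflow-id>";    // placeholder

async function ask(question: string): Promise<void> {
  const res = await fetch(`${FLOWISE_URL}/api/v1/prediction/${CHATFLOW_ID}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }),
    // Allow up to 10 minutes before aborting, far beyond the widget's limit.
    signal: AbortSignal.timeout(10 * 60 * 1000),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  console.log(data.text ?? data);
}

ask("Hello from a slow local Ollama model").catch(console.error);
```

If this call eventually returns while the embedded widget shows nothing, the cut-off is on the UI or proxy side rather than in the flow itself.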

@amansoniamazatic

@qdrddr I think you're facing a similar issue to this one, #2291

qdrddr commented May 10, 2024

I believe it relates to the fact that I'm running a locally self-hosted Ollama model, which may not be as fast and responsive as OpenAI.

Flowise needs to handle these situations appropriately. Ensuring people can run their models locally, even if they are slower and less responsive than commercially available APIs and models, is beneficial for the development process, for independence, and from a cost perspective.

Some RAG flows/Agents may respond slowly, and that's fine; not everything has to take a "chat conversation form." I would be okay waiting a minute or, in some cases, even 10 minutes for a response that is generated and returned to me asynchronously, for example in an email. I believe the Flowise platform should let people build flows that return asynchronous responses.
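
One way such asynchronous flows could look is a submit-then-poll pattern: the server returns a job ID immediately, and the client (or an email notifier) fetches the answer whenever it is ready. This is purely an illustration of the idea; the /jobs endpoints and the JobStatus shape below are hypothetical and do not exist in Flowise:

```typescript
// Hypothetical submit-then-poll pattern for long-running flows.
// None of these endpoints exist in Flowise today; this only sketches the
// idea of returning a job ID at once and fetching the answer later.
interface JobStatus {
  id: string;
  state: "pending" | "done" | "failed";
  answer?: string;
}

const BASE = "http://localhost:3000/api/v1"; // assumption: default port

async function submit(question: string): Promise<string> {
  const res = await fetch(`${BASE}/jobs`, {  // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }),
  });
  const { id } = (await res.json()) as JobStatus;
  return id;                                 // caller gets an ID immediately
}

async function waitForAnswer(id: string, intervalMs = 5000): Promise<string> {
  for (;;) {
    const res = await fetch(`${BASE}/jobs/${id}`); // hypothetical endpoint
    const job = (await res.json()) as JobStatus;
    if (job.state === "done") return job.answer ?? "";
    if (job.state === "failed") throw new Error(`job ${id} failed`);
    await new Promise((r) => setTimeout(r, intervalMs)); // poll again later
  }
}
```

Decoupling how long the model takes from how long any single HTTP connection stays open is the point: no proxy or widget timeout can break a job that is fetched after the fact.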

@HenryHengZJ (Contributor)

Will close this and continue on #2291.
