
Feature Request: Better error transparency from LLM response #1428

Open
zarlor opened this issue May 7, 2024 · 0 comments

zarlor commented May 7, 2024

Currently, no matter what issue an LLM has with a request, Danswer generally responds with just "LLM failed to respond, have you set your API key?", even when it's easy to verify (for example, by starting another chat session with the same assistant) that the API key is in fact set and everything otherwise works fine. I think what's happening is that the LLM returns some error message and Danswer replaces it with this standard response to the user. It would be nice to be able to see the actual response from the LLM to better troubleshoot the error. (Most likely the situations I've run into are just the overall chat session's context filling up, but I'm not positive that's the case, hence this request.)
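To illustrate the request, here is a minimal sketch (not Danswer's actual code; all names such as `answer_with_llm` and `call_llm` are hypothetical) of surfacing the provider's own error message instead of collapsing every failure into one generic string:

```python
from typing import Callable


def answer_with_llm(call_llm: Callable[[str], str], prompt: str) -> str:
    """Return the LLM answer, or a message exposing the underlying failure."""
    try:
        return call_llm(prompt)
    except Exception as exc:
        # Surface the provider's own error (auth failure, rate limit,
        # context length exceeded, ...) rather than the fixed
        # "LLM failed to respond, have you set your API key?"
        return f"LLM request failed: {type(exc).__name__}: {exc}"


# Example: a provider call that raises a context-length error
def fake_llm(prompt: str) -> str:
    raise ValueError("maximum context length exceeded")


print(answer_with_llm(fake_llm, "hello"))
# -> LLM request failed: ValueError: maximum context length exceeded
```

With something like this, a full-context error would read as such in the chat, instead of pointing the user at the API key.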
