Ollama Local model issue after update #494
Comments
Same problem for me! Hopefully this gets fixed soon; I was really excited to try this out.
I am also facing the same problem.
Same for me, with a 3090.
Same here.
You can now update the inference timeout via the settings page. Fetch the latest changes.
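For context, here is a minimal sketch of how a configurable inference timeout can be enforced around a blocking model call. The `INFERENCE_TIMEOUT` value and `run_model` helper are illustrative placeholders, not the project's actual settings or code:

```python
# Sketch of a client-side inference timeout; INFERENCE_TIMEOUT and
# run_model are hypothetical stand-ins for the app's setting and model call.
import concurrent.futures
import time

INFERENCE_TIMEOUT = 300  # seconds; raise this for slow local models

def run_model(prompt: str) -> str:
    # Placeholder that simulates a slow local model.
    time.sleep(2)
    return f"response to: {prompt}"

def infer_with_timeout(prompt: str) -> str:
    # Run the model call in a worker thread and give up after the timeout.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(run_model, prompt)
        try:
            return future.result(timeout=INFERENCE_TIMEOUT)
        except concurrent.futures.TimeoutError:
            raise TimeoutError("Inference took too long. Please try again.")

print(infer_with_timeout("hello"))
```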
After cloning yesterday's version, the local model no longer replies, failing even one or two steps in, unlike the previous version:
24.04.26 21:38:46: root: ERROR : Inference took too long. Model: OLLAMA, Model ID: llama3
24.04.26 21:38:46: root: INFO : SOCKET inference MESSAGE: {'type': 'error', 'message': 'Inference took too long. Please try again.'}
24.04.26 21:38:46: root: WARNING: Inference failed
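If you hit this timeout, it can help to first confirm that the local Ollama server itself responds, independently of the app. A minimal check, assuming Ollama's default port 11434 and that llama3 has already been pulled:

```python
# Direct request to the local Ollama REST API with a generous client-side
# timeout; the first response can be slow while the model loads into VRAM.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3",
        "prompt": "Reply with one word.",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req, timeout=600) as resp:
    print(json.loads(resp.read())["response"])
```

If this call succeeds but the app still times out, the problem is the app-side timeout rather than Ollama itself.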