The backend works, but it is non-obvious how to get it working. There are two major problems with the error handling for the ollama backend:

1. CORS prevents the application from talking with ollama.
2. The ollama model tag could not be found.

Each of these error cases needs a dedicated error message.
The error handling should detect potential CORS issues if possible and tell the user to set the OLLAMA_ORIGINS env variable. It is also not obvious what the CORS values should be when you are running this locally; I couldn't get it working.
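For reference, here is a rough sketch of what such a check could look like from the browser side (the `checkOllamaReachable` helper and the `localhost:3000` origin are my assumptions, not code from this repo). A CORS-blocked request surfaces as an opaque `fetch()` rejection with no response to inspect, so the best the UI can do is guess and suggest `OLLAMA_ORIGINS`:

```typescript
// Sketch only: a CORS-blocked request rejects with a bare TypeError and no
// HTTP response, so we can only infer a likely CORS problem and point the
// user at OLLAMA_ORIGINS.
async function checkOllamaReachable(baseUrl: string): Promise<string | null> {
  try {
    const res = await fetch(`${baseUrl}/api/tags`);
    if (!res.ok) {
      return `ollama responded with HTTP ${res.status}: check the URL and that ollama is running.`;
    }
    return null; // reachable
  } catch {
    // fetch() gives no detail on CORS failures; this message is a best guess.
    return (
      "Could not reach ollama. If ollama is running, this is likely a CORS issue: " +
      "restart it with OLLAMA_ORIGINS set to the origin serving this app, e.g. " +
      'OLLAMA_ORIGINS=http://localhost:3000 ollama serve (adjust to your setup).'
    );
  }
}
```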
The error handling should also detect the case that the model does not exist or has not yet been downloaded. If possible, you could query ollama for a list of already downloaded models and show them in a dropdown with typeahead.
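Ollama exposes the locally downloaded models via its `GET /api/tags` endpoint, so something along these lines could drive both the dropdown and a "model not found" message (the function names here are illustrative, not existing code in this project):

```typescript
// Sketch: ollama's GET /api/tags lists locally downloaded models, which could
// back a typeahead dropdown and a "model not downloaded" check.
interface OllamaTagsResponse {
  models: { name: string }[];
}

async function listDownloadedModels(baseUrl: string): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error(`ollama returned HTTP ${res.status}`);
  const data = (await res.json()) as OllamaTagsResponse;
  return data.models.map((m) => m.name); // e.g. ["llama2:latest", "mistral:7b"]
}

async function modelIsDownloaded(baseUrl: string, model: string): Promise<boolean> {
  return (await listDownloadedModels(baseUrl)).includes(model);
}
```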
In general, the menu for setting up ollama should be able to check the ollama connection without the user having to leave the menu and send a message to the chatbot. Perhaps you could add a "Test ollama connection" button or something similar.
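Such a button could simply chain the two checks sketched above and report one human-readable status string in the settings menu. Again, this is just an illustration reusing my hypothetical helpers:

```typescript
// Illustrative handler for a "Test ollama connection" button, reusing the
// checkOllamaReachable / modelIsDownloaded / listDownloadedModels sketches.
async function testOllamaConnection(baseUrl: string, model: string): Promise<string> {
  const reachError = await checkOllamaReachable(baseUrl);
  if (reachError) return reachError;

  if (!(await modelIsDownloaded(baseUrl, model))) {
    const models = await listDownloadedModels(baseUrl);
    return `Model "${model}" is not downloaded. Available models: ${models.join(", ")}. ` +
           `Run "ollama pull ${model}" to fetch it.`;
  }
  return "Connection OK: ollama is reachable and the model is available.";
}
```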
I don't want to give the impression that this project only has flaws, by the way. Overall I am impressed with this project and find amica to be extremely enjoyable. When I create issues and point out bugs, it is because I am satisfied with everything except the bug in question. The ollama backend works, but my hardware is a bit slow when it comes to prompt ingestion (processing the initial prompt). Generating tokens, even at a measly 7 tokens/s locally, is fast enough to feed the text-to-speech engine on amica.arbius.ai.