Is your feature request related to a problem? Please describe.
When Open WebUI is configured with several connections to different Ollama servers running the same model (e.g. llama3:latest), it is impossible to tell from the model selection dropdown in the chat which connection a model is running on. Similarly, it is impossible to know which connection a particular response came from.
Describe the solution you'd like
Allow nicknaming the connections (with a pre-generated nickname for "local" connections, i.e. those in the same Docker container or on the same machine), and then show the nickname every time the model name is shown.
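To make the ask concrete, here is a minimal Python sketch of one possible approach. OLLAMA_BASE_URLS is Open WebUI's existing semicolon-separated multi-connection setting, but the `nickname=` prefix syntax here is purely hypothetical, invented for illustration:

```python
import os

DEFAULT_URLS = "http://localhost:11434"

def parse_connections(raw: str) -> dict[str, str]:
    """Map nickname -> base URL.

    Hypothetical input: "laptop=http://10.0.0.2:11434;http://localhost:11434"
    """
    connections = {}
    for i, entry in enumerate(raw.split(";")):
        entry = entry.strip()
        if "=" in entry:
            # Explicit nickname given by the user.
            nickname, url = entry.split("=", 1)
        else:
            # Pre-generate a nickname for unnamed connections,
            # treating localhost as the "local" connection.
            url = entry
            if "localhost" in url or "127.0.0.1" in url:
                nickname = "local"
            else:
                nickname = f"ollama-{i}"
        connections[nickname] = url
    return connections

print(parse_connections(os.environ.get("OLLAMA_BASE_URLS", DEFAULT_URLS)))
```

The dropdown could then render entries like "llama3:latest (laptop)" vs "llama3:latest (desktop)".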
Describe alternatives you've considered
Alternatively, show the full connection string in the (i) tooltip.
Additional context
Here's an example of the model list from two different Ollama servers. There should be two entries for llama3:latest, one for each connection. Supporting this matters because the two machines have wildly different capabilities (a laptop with an NVIDIA RTX 3070 with 8 GB VRAM vs. a desktop with an NVIDIA RTX 4090 with 24 GB).
tjbck changed the title from "Support nicknaming connections to Ollama servers, and showing the nickname in the model selection dropdown" to "feat: ollama server naming support" on Apr 27, 2024