Currently, there's no way to have multiple "profiles" of LLMs (i.e. saved combinations of system prompt, parameters, and model). Even though you can save prompts and quickly change models, you're always talking to the same "profile". There are "Modelfiles", but from what I understood those are specific to Ollama and are not profiles usable with any model (e.g. API-based ones).
Overview
One of the main benefits of having "profiles" is being able to route questions. For example, a system prompt for a screenwriter with a high temperature and a high max-token limit could be great for writing, but terrible for coding. When coding, you probably want a "you are an expert software developer ..." sort of prompt, with a mid temperature and maybe a high max-token limit. For powerful models you may want more concise answers to reduce token count, while for cheaper/faster models you may want something different.
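To make the idea concrete, here is a minimal sketch of what two such profiles and a routing step could look like. All names, models, and values here are hypothetical illustrations, not an existing API:

```python
# Hypothetical profiles; every key, model name, and value below is
# illustrative only -- nothing here reflects existing code.
PROFILES = {
    "screenwriter": {
        "model": "gpt-4",
        "system_prompt": "You are an experienced screenwriter.",
        "temperature": 1.0,
        "max_tokens": 4096,
    },
    "developer": {
        "model": "gpt-4",
        "system_prompt": "You are an expert software developer.",
        "temperature": 0.4,
        "max_tokens": 4096,
    },
}

def params_for(task: str) -> dict:
    """Route a task to the profile whose settings suit it."""
    return PROFILES["developer" if task == "coding" else "screenwriter"]
```

With something like this, switching from writing to coding is one lookup instead of manually editing the system prompt and each parameter.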
Proposed Solution
Introduce a new concept of "Agents", "Profiles", "Personas", or similar. To reduce implementation cost/time, this could be a menu right above (or below) the "Prompts" menu and work in a very similar fashion, except that on the "Create New" screen you would also define the system prompt, the model, and the model parameters. The experience could also mirror prompts: typing /some-profile would load that profile.
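A minimal sketch of what such a profile object and the slash-command lookup might look like. The class, registry, and function names here are all assumptions for illustration, not existing code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Profile:
    """Hypothetical saved profile: system prompt + model + parameters."""
    name: str
    model: str          # any backend, not just Ollama (e.g. an API-based model)
    system_prompt: str
    temperature: float = 0.7
    max_tokens: int = 2048

# Hypothetical registry, populated from a "Create New" screen.
REGISTRY = {
    "writer": Profile("writer", "claude-3-opus", "You are a screenwriter.", 1.0, 4096),
}

def handle_command(text: str) -> Optional[Profile]:
    """Load a profile when the user types /<profile-name>, like /writer."""
    if text.startswith("/"):
        return REGISTRY.get(text[1:])
    return None
```

The point of the sketch is that the whole feature reduces to a small record type plus a name-based lookup, which keeps it close to how saved prompts already work.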
Another option would be a more flexible "Modelfiles" implementation, allowing selection of not only Ollama-based models but also API-based ones.
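For comparison, an Ollama Modelfile already bundles a base model, parameters, and a system prompt; a more flexible variant would mainly need a way to point at an API-backed model instead of a local one. Purely as a hypothetical sketch (the API-model reference in the last line is invented and does not exist today):

```
# Standard Ollama Modelfile fields:
FROM llama3
PARAMETER temperature 0.4
SYSTEM "You are an expert software developer."

# Hypothetical extension -- NOT valid Modelfile syntax today:
# FROM openai:gpt-4o
```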
Describe alternatives you've considered
Currently there's no way to do this other than manually changing the system prompt and the model parameters. Lobe Chat has an interesting implementation: alongside the chat history they have "Assistants", which are just saved system prompts, models, and parameters.
Additional context
Would be happy to contribute to this.
Let me know if this makes sense, or if I'm misunderstanding the current Modelfiles implementation. Maybe this is already possible and I just couldn't figure out how.