Use chat models properly (prompt tags already fixed) #480
Comments
@AlessandroSpallina please comment so I can assign you. Thanks :)
I'm here!
Looks to me like LangChain is already doing this; we can probably rely on it by passing the chat history and system prompt. The API:

```python
def llm(self, prompt, chat=False, stream=False):
    # here we retrieve `chat_history` from working memory and convert it to LangChain objects
    pass
```

Not sure about the
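For the conversion step, here is a minimal sketch; the `history` schema (`{"who": ..., "message": ...}`) and the helper name are assumptions for illustration, not the Cat's actual internals:

```python
from langchain.schema import AIMessage, HumanMessage, SystemMessage

def history_to_messages(system_prompt, history):
    """Turn a working-memory history into the message list a chat model expects."""
    messages = [SystemMessage(content=system_prompt)]
    for turn in history:
        if turn["who"] == "Human":  # assumed schema: {"who": ..., "message": ...}
            messages.append(HumanMessage(content=turn["message"]))
        else:
            messages.append(AIMessage(content=turn["message"]))
    return messages
```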
I want to help with this, but:

```python
# Zephyr LLM: build a chat prompt that respects the model's special tags
from langchain.chains import LLMChain
from langchain.prompts.chat import (AIMessagePromptTemplate, ChatPromptTemplate,
                                    HumanMessagePromptTemplate, SystemMessagePromptTemplate)

template = "<|system|>\nYou are a helpful assistant that translates {input_language} to {output_language}</s>\n"
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "<|user|>\n{text}</s>\n"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
# the bare assistant tag cues the model to start generating
final_part_prompt = AIMessagePromptTemplate.from_template("<|assistant|>\n")
final_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt, final_part_prompt])
chain = LLMChain(llm=llm, prompt=final_prompt)
out = chain.run(input_language="English", output_language="French", text="My family are going to visit me next week.")
```
```python
class PromptTemplateTags:
    """Creates a prompt from an LLM template. The template must exactly match
    the one the model was served with."""

    def __init__(self, template_tags: str, system_tag: str, user_tag: str):
        self.template_tags = template_tags
        self.system_tag = system_tag
        self.user_tag = user_tag

    def create_prompt(self, system_message: str = "", user_message: str = "") -> str:
        # substitute the placeholder tags with the actual messages
        return (self.template_tags
                .replace(self.system_tag, system_message)
                .replace(self.user_tag, user_message))

prompt_model = """<|system|>
{{ .System }}
</s>
<|user|>
{{ .Prompt }}
</s>
<|assistant|>"""

prompt = PromptTemplateTags(prompt_model, "{{ .System }}", "{{ .Prompt }}").create_prompt()
print(prompt)
```

I gathered all of this info in very little time, but I think we can define a good design base!
To answer the third point: why do we need to split the prompt and make it hookable if we already have the prefix and suffix hooks?
(The Prompt Merge block is only there for schematic purposes!)
@valentimarco thanks for the diagram, it looks reasonable. To be totally honest, I am scared about all this fragmentation we have to deal with.
Can we focus on the last point? I mean, in there we can pass the chat history from working memory directly to LangChain's ChatGPT and ChatOllama classes, as you showed above. I know it's not perfect, but it is the right direction without the risk of overengineering. Thanks a lot for dedicating the time.
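A minimal sketch of that "last point", assuming ChatOllama and an illustrative model name (the exact wiring inside the Cat would differ):

```python
from langchain.chat_models import ChatOllama
from langchain.schema import HumanMessage, SystemMessage

llm = ChatOllama(model="zephyr")  # model name is illustrative
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Translate to French: my family will visit me next week."),
]
reply = llm(messages)  # the chat model applies its own prompt tags internally
print(reply.content)
```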
Maybe we can resolve this with a temp plugin with
I agree with you; in a few months these changes may be reverted, but I don't see any possible solution for good customizability other than the ones explained earlier.
We saw that most of the runners:
Now we need to use chat models properly by:
Also, Ollama now supports the OpenAI pseudo-standard.
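For reference, Ollama exposes an OpenAI-compatible endpoint under `/v1`, so the plain OpenAI client can drive a local model; the host, port, and model name below are Ollama defaults and examples:

```python
from openai import OpenAI

# the API key is required by the client but ignored by Ollama
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
resp = client.chat.completions.create(
    model="zephyr",  # any model pulled into Ollama
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)
print(resp.choices[0].message.content)
```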
Yep, we need to wait a little more, and then we can use a single class for most of the runners!
Work in progress in PR #783
At the moment we insert both the system prompt (aka `prompt_prefix`) and the conversation history into the prompt, without respecting model-specific prompt tags, treating every model as a completion model. Let's try to design and implement a solid way to leverage both prompt tags and chat models, as suggested by @AlessandroSpallina.
As a hypothesis, tags could be described in factory classes and used when `cat._llm` or the agent is invoked.
Notes:
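One way the factory-class hypothesis could look; every field and class name here is invented for illustration, not the Cat's actual config schema:

```python
from pydantic import BaseModel

class LLMZephyrConfig(BaseModel):
    """Hypothetical factory class that declares the model's prompt tags."""
    system_tag: str = "<|system|>\n"
    user_tag: str = "<|user|>\n"
    assistant_tag: str = "<|assistant|>\n"
    stop_tag: str = "</s>\n"

    def wrap(self, system: str, user: str) -> str:
        # assemble a Zephyr-style prompt from the declared tags
        return (f"{self.system_tag}{system}{self.stop_tag}"
                f"{self.user_tag}{user}{self.stop_tag}"
                f"{self.assistant_tag}")
```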