Support for different language model backends/APIs #54
Comments
@EndingCredits completely agree. That's on my list to work on -- any particular model you'd like to see?
I've been playing with functionary, but a general system for rolling your own backend interface based on e.g. a JSON schema of the conversation (as current OpenAI function calling does) should make it easy(-ish) to integrate things. There is a backend server for functionary here: https://github.com/MeetKai/functionary, though unfortunately it doesn't really work with ooba without a translation layer that formats the prompt and dumps it into chat completion.
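To make the "translation layer" idea concrete, here is a minimal sketch of the prompt-formatting half: turning OpenAI-style chat messages into a functionary-style prompt. The `<|from|>`/`<|recipient|>`/`<|content|>` markers match what functionary emits; the `format_prompt` helper and the `"all"` default for the recipient are my own assumptions, not project code.

```python
def format_prompt(messages):
    """Render OpenAI-style chat messages into a functionary-style prompt.

    Sketch only: token layout is inferred from functionary's output markers.
    """
    parts = []
    for m in messages:
        # Ordinary chat messages are addressed to 'all'; tool results would
        # set their own recipient.
        recipient = m.get("recipient", "all")
        parts.append(
            f"<|from|> {m['role']}\n<|recipient|> {recipient}\n<|content|> {m['content']}\n"
        )
    # Cue the model to continue as the assistant
    parts.append("<|from|> assistant\n<|recipient|> ")
    return "".join(parts)
```

A backend adapter for ooba would presumably call something like this before dispatching the raw string to the completion endpoint.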
This is very cool, I'm going to try to integrate this, hopefully mid next week.
Let me know if you want a hand and I can have a look into this - the only barrier is understanding the project architecture, but a bit of time will solve that. If it helps, this is what I was using to parse the returned completion:

```python
import re

def parse_response(response_text):
    # Add start of prompt back in
    response_text = "<|from|> assistant\n<|recipient|> " + response_text.strip()
    # Strip stop token
    response_text = response_text.replace("<|stop|>", "")

    # Parse text into discrete messages.
    # This is probably a bit fragile, but in general regex seems like a good solution.
    # - based on a solution from ChatGPT
    messages = []
    pattern = re.compile(
        r'<\|from\|> (.+?)<\|recipient\|> (.+?)<\|content\|> (.+?)(?=\s*<\|from\|>|$)',
        re.DOTALL,
    )
    for match in pattern.finditer(response_text):
        current_message = {
            "from": match.group(1).strip(),
            "recipient": match.group(2).strip(),
            "content": match.group(3).strip(),
        }
        messages.append(current_message)
    #print("MESSAGES:", messages)

    # Format messages into an OpenAI-like assistant message with tool calls
    assistant_message = {"role": "assistant", "content": ""}
    tool_calls = []
    for message in messages:
        #assert message['from'] == 'assistant'
        if message['recipient'] == 'all':
            # Messages addressed to 'all' are plain text for the user
            assistant_message['content'] = message['content']
        else:
            # Anything else is addressed to a function, i.e. a tool call
            tool_call = {
                "name": message['recipient'],
                "arguments": message['content'],
            }
            tool_calls.append({"function": tool_call})
    if tool_calls:
        assistant_message["tool_calls"] = tool_calls
    return assistant_message
```
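As a quick sanity check on the parser above, here it is inlined verbatim (so the snippet is self-contained) with two example completions: a plain reply addressed to `all`, and a tool call. The sample completion strings are mine, chosen to match the token format the parser expects.

```python
import re

# Inlined copy of the parse_response from the comment above, for demo purposes.
def parse_response(response_text):
    response_text = "<|from|> assistant\n<|recipient|> " + response_text.strip()
    response_text = response_text.replace("<|stop|>", "")
    pattern = re.compile(
        r'<\|from\|> (.+?)<\|recipient\|> (.+?)<\|content\|> (.+?)(?=\s*<\|from\|>|$)',
        re.DOTALL,
    )
    messages = [
        {
            "from": m.group(1).strip(),
            "recipient": m.group(2).strip(),
            "content": m.group(3).strip(),
        }
        for m in pattern.finditer(response_text)
    ]
    assistant_message = {"role": "assistant", "content": ""}
    tool_calls = []
    for message in messages:
        if message["recipient"] == "all":
            assistant_message["content"] = message["content"]
        else:
            tool_calls.append(
                {"function": {"name": message["recipient"], "arguments": message["content"]}}
            )
    if tool_calls:
        assistant_message["tool_calls"] = tool_calls
    return assistant_message

# Plain text reply (recipient 'all' becomes the message content):
print(parse_response("all\n<|content|> Hello!<|stop|>"))
# → {'role': 'assistant', 'content': 'Hello!'}

# Tool call (any other recipient becomes a function call):
print(parse_response('get_weather\n<|content|> {"city": "Paris"}<|stop|>'))
# → {'role': 'assistant', 'content': '',
#    'tool_calls': [{'function': {'name': 'get_weather',
#                                 'arguments': '{"city": "Paris"}'}}]}
```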
@ashpreetbedi there is a "One to rule them all" solution. ;)
@chymian that's really cool. Let me get started on that.
I realise the simple answer is to replace the hostname in the openai module, but it would be nice to add support for various backends (for the LLM part, that is). There are a couple of open-source function-calling models, and they can be queried in similar ways.
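For reference, the "replace the hostname" workaround looks roughly like this with the pre-1.0 `openai` Python package (the host, port, and API key are placeholders for whatever OpenAI-compatible local server you run):

```python
# Sketch: point the openai client at a self-hosted, OpenAI-compatible server
# instead of api.openai.com. URL and key below are placeholders, not real values.
import openai

openai.api_base = "http://localhost:8000/v1"  # e.g. a local functionary or ooba server
openai.api_key = "not-needed-for-local"       # many local servers ignore the key
```

This works only as far as the backend actually speaks the OpenAI chat-completion schema, which is exactly why a proper backend abstraction would be nicer.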