[Search] [Chat Playground] handle when the ActionLLM is not a ChatModel #183931
Conversation
💚 Build Succeeded
LGTM, code review only
@@ -166,6 +167,15 @@ class ConversationalChainFn {
        });
      }
    },
    // callback for prompt based models (Bedrock uses ActionsClientLlm)
    handleLLMStart(llm, input, runId, parentRunId, extraParams, tags, metadata) {
Same lint issue?
const llm = isChatModel
  ? new FakeListChatModel({
      responses,
    })
  : new FakeListLLM({ responses });
Do you know if the lint is correct here? Looks like two different styles for two functions.
I would say so, otherwise CI would complain. Looks like it's based on line width.
Summary
There are two action-based LLMs: `ActionsClientChatOpenAI` and `ActionsClientLlm`. `ActionsClientChatOpenAI` is based on the ChatModel LLM, while `ActionsClientLlm` is a prompt-based model. The callbacks fired differ depending on whether a ChatModel or an LLM model is in use, and token counting was only done in the ChatModelStart callback. This meant the token count didn't happen for LLM-based actions (Bedrock). To fix this, I listen on both callbacks.
Checklist
Delete any items that are not applicable to this PR.