
Can't get filtermultidoc demo w/LanceDocChatAgent working without OpenAI API auth attempt... #8

Open
fat-tire opened this issue Mar 9, 2024 · 2 comments

Comments

fat-tire commented Mar 9, 2024

Looking at filter-multi-doc.py and trying to use it with an all-local LLM, without any OpenAI involvement...

First I had to change the embed_cfg to not use OpenAIEmbeddings, so I changed it to:

# Configs
embed_cfg = SentenceTransformerEmbeddingsConfig()

(Oh, I also had to install langroid with pip install -U langroid[litellm,hf-embeddings].)
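For reference, the fully local setup I'm aiming for looks roughly like this. This is a sketch based on langroid's local-LLM docs, not verified against this demo; the endpoint URL and context length are assumptions for my setup:

```python
# Sketch of an all-local langroid config (assumed details; adapt to your setup).
from langroid.embedding_models.models import SentenceTransformerEmbeddingsConfig
from langroid.language_models.openai_gpt import OpenAIGPTConfig

# Local HF embeddings instead of OpenAIEmbeddings
embed_cfg = SentenceTransformerEmbeddingsConfig(
    model_type="sentence-transformer",
)

# Point the OpenAI-style client at a local OpenAI-compatible server
# (e.g. llama.cpp / oobabooga serving OpenChat 3.5 at this address).
llm_cfg = OpenAIGPTConfig(
    chat_model="local/localhost:8000/v1",  # assumed local endpoint
    chat_context_length=8192,              # assumed; match your model
)
```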

This worked, and I got it to load the Wikipedia page and everything. But something in this line is still trying to authenticate to OpenAI:

   task = LanceRAGTaskCreator.new(agent, interactive=True)

I'm trying to look up the call chain to see what exactly is still depending on OpenAI, but figured maybe I should just ask.

I'm not registering for an OpenAI API key, and it seems like most of the demos rely on it, even the ones I would think wouldn't...

Using OpenChat 3.5 fwiw, but the log says:

Starting Agent LancePlanner (1) gpt-4-turbo-preview

GPT-4 Turbo preview? Why? The llm_config seems to be set up correctly for the local LLM, and is referenced correctly in DocChatAgentConfig, so I'm not sure why it's going to gpt-4 unless something is hardcoded somewhere (here?).

How should the file be modified to use only local embedders, vector DBs, and whatever-it's-doing-when-you-run-the-agents?

Thanks!

PS-- there's a typo: "reqdy" in a few of the examples.

pchalasani (Contributor) commented
@fat-tire somehow I didn't get a notification for this issue and only noticed it just now. If DocChatAgent or its variants were still trying to use OpenAI despite the configs using a local LLM and HF embeddings, it was probably because DocChatAgent uses another agent for Relevance Extraction, and the default LLM config for DocChatAgent was not being passed to that agent. This has since been fixed, so let me know if you still have this issue.
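To illustrate the kind of bug described above (this is a generic sketch of the pattern, not langroid's actual code): if a parent agent constructs a helper agent with a fresh default config instead of passing its own LLM config down, the helper silently falls back to the library's default model.

```python
from dataclasses import dataclass, field

@dataclass
class LLMConfig:
    chat_model: str = "gpt-4-turbo-preview"  # library-wide default

@dataclass
class AgentConfig:
    llm: LLMConfig = field(default_factory=LLMConfig)

class Agent:
    def __init__(self, config: AgentConfig):
        self.config = config

class DocAgent(Agent):
    def __init__(self, config: AgentConfig):
        super().__init__(config)
        # BUG: helper agent built with a fresh default config,
        # ignoring the local-LLM config the caller supplied.
        self.extractor = Agent(AgentConfig())
        # FIX: propagate the parent's LLM config instead.
        self.fixed_extractor = Agent(AgentConfig(llm=config.llm))

local_cfg = AgentConfig(llm=LLMConfig(chat_model="local/openchat-3.5"))
doc_agent = DocAgent(local_cfg)
print(doc_agent.extractor.config.llm.chat_model)        # gpt-4-turbo-preview (the bug)
print(doc_agent.fixed_extractor.config.llm.chat_model)  # local/openchat-3.5
```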

fat-tire (Author) commented

sounds good, thanks! Congrats on version 0.1.234 :)
