Does it work when chains are passed with return_source_documents=True? #19

Open
sharrajesh opened this issue Jun 2, 2023 · 6 comments

@sharrajesh

I seem to be getting an error whenever my chains have the above config...

@msoedov (Owner) commented Jun 2, 2023

Hi @sharrajesh! Thx for reporting the issue! Could you please share your code snippet?

@sharrajesh (Author)

    import os

    from langchain.chains import RetrievalQA
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.llms import OpenAI
    from langchain.vectorstores import FAISS

    # BASE_DIR is defined elsewhere in this project.
    faiss_db_path = os.path.join(BASE_DIR, "workspace", "chat_with_text_files", "faiss_db")

    def load_db():
        embeddings = OpenAIEmbeddings()
        vectordb = FAISS.load_local(faiss_db_path, embeddings)
        retriever = vectordb.as_retriever()
        return retriever

    def create_qa_chain(retriever):
        qa_chain = RetrievalQA.from_chain_type(
            llm=OpenAI(),
            chain_type="stuff",
            retriever=retriever,
            return_source_documents=True,
            verbose=True,
        )
        return qa_chain

    def cite_sources(llm_response):
        print(llm_response["result"])
        print("\n\nSources:")
        for source in llm_response["source_documents"]:
            print(source.metadata["source"])

    retriever = load_db()
    qa_chain = create_qa_chain(retriever)
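
A minimal call sketch for the snippet above (the question string is just a placeholder): RetrievalQA takes its input under the "query" key, and with return_source_documents=True the response dict carries both "result" and "source_documents", which is exactly what cite_sources reads:

    # Hypothetical question; RetrievalQA expects its input under "query".
    llm_response = qa_chain({"query": "What topics do these files cover?"})
    cite_sources(llm_response)  # prints the answer, then each source path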

@sharrajesh (Author)

Another error:
AttributeError: 'ConversationalRetrievalChain' object has no attribute 'input_key'. Did you mean: 'input_keys'?

    from langchain.chains import ConversationalRetrievalChain
    from langchain.chat_models import ChatOpenAI
    from langchain.memory import ConversationBufferMemory

    def create_qa_chain(retriever):
        memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
        # OPENAI_TEMPERATURE is defined elsewhere in this project.
        chat_llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=OPENAI_TEMPERATURE)
        qa_chain = ConversationalRetrievalChain.from_llm(
            llm=chat_llm,
            retriever=retriever,
            chain_type="stuff",
            verbose=True,
            memory=memory,
        )
        return qa_chain

    def cite_sources(llm_response):
        # Check if 'answer' key exists in the llm_response.
        if "answer" in llm_response:
            print(llm_response["answer"])

        # Check if 'source_documents' key exists in the llm_response.
        if "source_documents" in llm_response:
            print("\n\nSources:")
            for source in llm_response["source_documents"]:
                print(source.metadata["source"])

    retriever = load_db()
    qa_chain = create_qa_chain(retriever)
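
A note on the snippet above, as a hedged sketch rather than a confirmed fix: ConversationalRetrievalChain takes its input under "question" and returns the reply under "answer", which is why cite_sources checks that key. As written, the chain never sets return_source_documents=True, so "source_documents" will be absent from the response; enabling it also makes the chain return two output keys, so the memory needs output_key="answer" to know which one to store. Reusing retriever and chat_llm from the snippet above:

    # Sketch: enable source documents and tell the memory which output to record.
    memory = ConversationBufferMemory(
        memory_key="chat_history", return_messages=True, output_key="answer"
    )
    qa_chain = ConversationalRetrievalChain.from_llm(
        llm=chat_llm,
        retriever=retriever,
        chain_type="stuff",
        memory=memory,
        return_source_documents=True,
    )
    llm_response = qa_chain({"question": "What topics do these files cover?"})  # hypothetical question
    cite_sources(llm_response)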

@msoedov (Owner) commented Jun 6, 2023

Thx! Received all the info. I have a local patch, but it still requires testing.
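
For context, one way the AttributeError above can be avoided (a sketch only, not necessarily what the actual patch does): LangChain chains are not guaranteed to have a single input_key attribute, but every Chain exposes an input_keys list, so consuming code can fall back to it. Here chain stands for whichever chain object is being served:

    # Sketch: prefer input_key where a chain defines it (e.g. RetrievalQA has
    # input_key="query"); otherwise fall back to the input_keys list, whose
    # first entry is "question" for ConversationalRetrievalChain.
    input_key = getattr(chain, "input_key", None) or chain.input_keys[0]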

@msoedov (Owner) commented Jul 12, 2023

Fixed this issue in 0.0.12

@joinalahmed

This does not work with ConversationalRetrievalChain; please provide a fix, as RetrievalQA is being deprecated.
