
PDF-Question-Answering

AskYourPDF is a Python application built with Streamlit and LangChain that makes PDF documents interactive and easily queryable. It uses LangChain's text splitting, embeddings, and vector stores to enhance the experience of working with PDFs. Whether you want to perform a similarity search, retrieve the top-k chunks, or submit questions to an LLM such as an OpenAI model or Falcon-7B, AskYourPDF streamlines the process with an intuitive, user-friendly interface.

Note - you need an OpenAI API key and a Hugging Face API token to run this application.

Store them in a .env file.
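A minimal .env file might look like the sketch below. The variable names shown (OPENAI_API_KEY, HUGGINGFACEHUB_API_TOKEN) are the ones LangChain conventionally reads from the environment; the exact names this project expects depend on the scripts themselves, and the key values are placeholders.

```
OPENAI_API_KEY=your-openai-key-here
HUGGINGFACEHUB_API_TOKEN=your-huggingface-token-here
```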

How it works

The application processes PDFs by splitting the text into chunks and creating vector representations of those chunks with OpenAI embeddings. The vectors are stored in a FAISS vector database, enabling quick retrieval of semantically similar chunks in response to a user query. The retrieved chunks are then passed to a large language model (LLM), which generates a contextually relevant answer. Streamlit provides the GUI, and LangChain handles the interaction with the LLM.
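The retrieval step described above can be sketched as follows. This is a minimal, self-contained illustration rather than the app's actual code: it substitutes a toy bag-of-words vector for the real OpenAI embeddings and a plain cosine-similarity scan for FAISS, and all names here (embed, top_k, the sample chunks) are invented for the example.

```python
import numpy as np

def embed(text, vocab):
    # Toy stand-in for an embedding model: a bag-of-words count vector.
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def top_k(question, chunks, vocab, k=3):
    # Score every chunk by cosine similarity to the question, highest first.
    q = embed(question, vocab)
    def cos(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return (a @ b) / denom if denom else 0.0
    ranked = sorted(chunks, key=lambda c: cos(embed(c, vocab), q), reverse=True)
    return ranked[:k]

chunks = [
    "the invoice total is 42 dollars",
    "payment is due within 30 days",
    "the cat sat on the mat",
]
vocab = sorted({w for c in chunks for w in c.lower().split()})
print(top_k("when is the payment due", chunks, vocab, k=1))
# The chunk sharing the most question terms ("payment is due ...") ranks first.
```

In the real application, FAISS performs this nearest-neighbour search over high-dimensional embedding vectors far more efficiently than a linear scan, and the top chunks are stuffed into the LLM prompt as context.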

Usage

To use the application, run the desired .py file with the Streamlit CLI (after installing Streamlit):

streamlit run app.py

The Falcon API currently has long question-answering response times, averaging 10 to 15 minutes. The delay may be due to server overload on the Falcon API side.

Note - there are four files in the folder:

1. gpt_falcon.py

This script lets you choose between the OpenAI and Falcon LLMs for PDF question answering. Pick an LLM, then upload the file you want to ask questions about. (Screenshots of the output omitted.)

2. gpt_with_chunks.py

This script produces the following output:

  1. Chunks with similar context/meaning to the question: the chunks of text whose context or meaning is similar to the user's question.
  2. Top 3 chunks similar to the question: the three text chunks most relevant to the user's question.
  3. Answer from the LLM: the answer to the question generated by the language model.
  4. The 'k' value for each chunk retrieval: the length of each retrieved text chunk.

(Screenshots of the output omitted.)

3. gpt.py

This script uses the OpenAI LLM for question answering. It does not display chunks with similar meaning or the k value of each chunk. (Screenshot of the output omitted.)

4. falcon.py

This script uses the falcon-7b-instruct LLM for question answering. It does not display chunks with similar meaning or the k value of each chunk. (Screenshot of the output omitted.)
