A full-stack application that turns any text document into context an LLM can reference while chatting. You can add and delete documents and create multiple knowledge bases.
This project is more than just a chatbot: my goal was to learn how to build a production-ready chatbot on the OpenAI API, and through my research I found a way to accomplish that. Along the way I have also been learning about vector databases and LangSmith, both of which are incorporated extensively into this project.
- Chat with text files
- Chat streaming
- Upload multiple text files
- Maintainable vector DB (add, delete files)
- Static API Token Authentication for ChromaDB
- Create more than one knowledge base
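The retrieval flow behind these features can be sketched in miniature. The snippet below is a toy stand-in, not the app's actual code: bag-of-words vectors replace the OpenAI embedding model and a plain list replaces ChromaDB, but the shape is the same — split documents into chunks, embed them, rank chunks by similarity to the question, and build the prompt from the top matches.

```python
# Toy sketch of the RAG retrieval flow: bag-of-words vectors stand in
# for text-embedding-ada-002, and a list of strings stands in for a
# ChromaDB collection, so the example is fully self-contained.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for the real embedding model: word-count vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank stored chunks by similarity to the query; ChromaDB does the
    # equivalent nearest-neighbour search over real embeddings.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "ChromaDB stores document embeddings per knowledge base.",
    "Streamlit renders the chat UI and streams model tokens.",
    "Uploaded text files are split into chunks before embedding.",
]
context = retrieve("how are uploaded files stored?", chunks)
prompt = "Answer using this context:\n" + "\n".join(context)
```

In the real app the top-k chunks are stuffed into the chat prompt exactly like `prompt` above, and the chat model (`gpt-3.5-turbo`) answers from them.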
Demo: `demo-qna-with-rag-1.mp4`
For future development, I would like to build the frontend in React.js and extend the ChromaDB functionality with features such as filtering data, deleting files via search, and, most importantly, adding conversation memory to the chatbot.
Clone the repository:

```bash
git clone git@github.com:Ja-yy/QnA-With-RAG.git
```
Set the following environment variables in a `.env` file:
```env
OPENAI_API_KEY='<open_ai_key>'
EMBEDDING_MODEL='text-embedding-ada-002'
CHAT_MODEL='gpt-3.5-turbo'
TEMPERATURE=0
MAX_RETRIES=2
REQUEST_TIMEOUT=15
CHROMADB_HOST="chromadb"
CHROMADB_PORT="8000"
CHROMA_SERVER_AUTH_CREDENTIALS="<test-token>"
CHROMA_SERVER_AUTH_CREDENTIALS_PROVIDER="chromadb.auth.token.TokenConfigServerAuthCredentialsProvider"
CHROMA_SERVER_AUTH_PROVIDER="chromadb.auth.token.TokenAuthServerProvider"
CHROMA_SERVER_AUTH_TOKEN_TRANSPORT_HEADER="AUTHORIZATION"
```
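With this token provider, every client request to ChromaDB must carry the credentials in the header named by `CHROMA_SERVER_AUTH_TOKEN_TRANSPORT_HEADER`. The sketch below shows roughly how that header and the server URL are assembled from the variables above; the fallback values are illustrative, and no request is actually sent.

```python
# Sketch: build the ChromaDB base URL and static-token auth header from
# the environment. The setdefault values mirror the .env example and are
# only fallbacks for illustration.
import os

os.environ.setdefault("CHROMADB_HOST", "chromadb")
os.environ.setdefault("CHROMADB_PORT", "8000")
os.environ.setdefault("CHROMA_SERVER_AUTH_CREDENTIALS", "test-token")

base_url = f"http://{os.environ['CHROMADB_HOST']}:{os.environ['CHROMADB_PORT']}"
# The AUTHORIZATION transport header carries the token as a Bearer credential.
headers = {"Authorization": f"Bearer {os.environ['CHROMA_SERVER_AUTH_CREDENTIALS']}"}

# e.g. requests.get(f"{base_url}/api/v1/heartbeat", headers=headers)
```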
Build and run the Docker images:

```bash
docker-compose up -d --build
```
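For orientation, a compose file for this stack roughly wires two services: the app (Streamlit on port 8501) and ChromaDB (port 8000, matching `CHROMADB_HOST`/`CHROMADB_PORT` above). The sketch below is illustrative only; the service names and ports follow the env vars, everything else is assumed rather than taken from the repository.

```yaml
# Minimal sketch of a docker-compose.yml for this stack (illustrative).
version: "3.8"
services:
  app:
    build: .
    ports:
      - "8501:8501"   # Streamlit chat UI
    env_file: .env
    depends_on:
      - chromadb
  chromadb:
    image: chromadb/chroma
    ports:
      - "8000:8000"   # ChromaDB HTTP API
    env_file: .env    # picks up the CHROMA_SERVER_AUTH_* settings
```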
Now open http://localhost:8501 in your browser.
Enjoy the app :)