Retrieval Augmented Generation (RAG) using LangChain Framework, FAISS vector store and FastEmbed text embedding model.

Retrieval Augmented Generation with Seedly Articles

Outline

In this project, I built a RAG pipeline around a quantized Llama 2 model, llama-2-7b-chat.Q4_K_M.gguf, downloaded from TheBloke/Llama-2-7B-Chat-GGUF. The documents used for retrieval were obtained by web scraping the Seedly Blog, which publishes articles about personal finance. The articles I retrieved were mostly about purchasing property in Singapore and about insurance policies. The aim is to produce a language model that is more contextually aware and better able to answer personal finance questions within the context of Singapore.
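
To give a concrete idea of how such a quantized GGUF model can be loaded, here is a minimal sketch using LangChain's LlamaCpp wrapper. It assumes llama-cpp-python and langchain-community are installed; the file path and generation parameters are illustrative, not taken from this repo.

```python
# Minimal sketch: load the quantized Llama 2 chat model with LangChain's
# LlamaCpp wrapper. The model path and parameters below are assumptions.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="models/llama-2-7b-chat.Q4_K_M.gguf",  # GGUF file from TheBloke/Llama-2-7B-Chat-GGUF
    n_ctx=4096,        # context window shared by the prompt and the generated tokens
    temperature=0.7,
    max_tokens=256,
)

print(llm.invoke("What is an HDB flat?"))
```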

Method

I used Scrapy to scrape the articles, FastEmbed to convert text chunks into embeddings, and a FAISS vector store to store and retrieve the embedded chunks. Finally, I used LangChain to interface all of these components, from retrieval of text chunks to prompt structuring and chaining, to produce the desired output.
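
As an illustration of how these pieces fit together, the sketch below indexes scraped article text with FastEmbed and FAISS through LangChain and retrieves the two most similar chunks for a query. The chunk sizes, example query, and variable names are assumptions for illustration, not the exact code in this repo; a recent LangChain with the langchain-community and langchain-text-splitters packages is assumed.

```python
# Sketch: embed scraped article chunks with FastEmbed, index them in FAISS,
# and retrieve the top-2 chunks most similar to a question.
from langchain_community.embeddings import FastEmbedEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter

articles = ["...scraped Seedly article text...", "..."]  # output of the Scrapy spider

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.create_documents(articles)

embeddings = FastEmbedEmbeddings()                 # lightweight FastEmbed embedding model
vectorstore = FAISS.from_documents(chunks, embeddings)
vectorstore.save_local("faiss_index")              # persist the index for later retrieval

retriever = vectorstore.as_retriever(search_kwargs={"k": 2})
docs = retriever.invoke("How much CPF can I use to buy a resale flat?")
```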

The final output is generated by chaining two prompts: the first summarizes the retrieved context (the two text chunks most similar to the user's question), and the second generates the actual response from that summary. This keeps the response-generation prompt from growing too long, since including full-length text chunks as context would leave little of the context window for generating the output.
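
A hedged sketch of this two-step chain in LCEL is shown below. It reuses the `retriever` and `llm` objects from the sketches above; the prompt wording and variable names are illustrative rather than the exact prompts used in this project.

```python
# Sketch: two chained prompts in LCEL. The first condenses the retrieved
# chunks, the second answers the question using that summary.
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

summarize_prompt = PromptTemplate.from_template(
    "Summarize the key points of the following passages:\n{context}"
)
answer_prompt = PromptTemplate.from_template(
    "Using this summary as context:\n{summary}\n\nAnswer the question: {question}"
)

def format_docs(docs):
    # Join retrieved chunks into a single block of text for the prompt.
    return "\n\n".join(d.page_content for d in docs)

# `retriever` and `llm` are the objects created in the earlier sketches.
summarize_chain = (
    {"context": retriever | format_docs}
    | summarize_prompt
    | llm
    | StrOutputParser()
)

rag_chain = (
    {"summary": summarize_chain, "question": RunnablePassthrough()}
    | answer_prompt
    | llm
    | StrOutputParser()
)

answer = rag_chain.invoke("Should I buy a resale HDB flat or a BTO?")
```

The intermediate summarization costs one extra LLM call, but it keeps the final prompt short, which matters given the limited context window of the 7B chat model.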

Results

To assess the outputs of the RAG pipeline, I compared them against those of the base LLM. For some questions, RAG did not significantly change the outputs from the base LLM. This is probably because the additional information retrieved in this project comes from the public internet, which may be similar to the data the base model was trained on. Comparison of outputs: Experiment_logs.txt

The chaining of prompts works as expected, with the summary of the retrieved text chunks successfully added to the second prompt. This method should improve the model's responses when the retrieved context provides information previously unavailable to the model.

Reflections

Through this project, I learnt how to implement RAG using LangChain, how to interface an LLM with prompt templates and prompt chaining using the LangChain Expression Language (LCEL), and how to use vector stores and embedding functions together with LangChain. I believe these foundational concepts of LLM application development will allow me to build more complex LLM applications in the future.
