
Toy RAG example

A minimal example of in-memory RAG with a local Ollama LLM.

It uses the Mixtral 8x7B LLM (via Ollama), LangChain (to load the model), and ChromaDB (to build and search the RAG index). More details in What is RAG anyway?

To run this example, the following is required:

  • Install Ollama.ai
  • Download a local LLM: ollama run mixtral (requires at least ~50 GB of RAM; smaller LLMs may work, but I didn't test them)
  • pip install -r requirements.txt (venv recommended)

Then run:

python mini_rag.py
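
At a high level, the script embeds a few documents into an in-memory Chroma index, retrieves the chunks most similar to the question, and hands them to the local LLM as context. Below is a minimal sketch of that flow, assuming the langchain-community integrations for Ollama and Chroma; the document strings and variable names are illustrative, and the actual mini_rag.py may differ.

```python
# Minimal in-memory RAG sketch with LangChain, ChromaDB, and Ollama.
# Illustrative only; the real mini_rag.py may be structured differently.
from langchain_community.llms import Ollama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma

docs = [
    "Ollama runs large language models locally.",
    "ChromaDB is an embedding database often used to build RAG indexes.",
]

# Build the index in memory (no persist_directory, so nothing is written to disk).
embeddings = OllamaEmbeddings(model="mixtral")
index = Chroma.from_texts(docs, embedding=embeddings)

# Retrieve the chunks most relevant to the question.
question = "What is ChromaDB used for?"
hits = index.similarity_search(question, k=2)
context = "\n".join(doc.page_content for doc in hits)

# Ask the local LLM, grounding the answer in the retrieved context.
llm = Ollama(model="mixtral")
print(llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```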

Example

demo_mini_rag.mov

Simplified sequence

(Sequence diagram of the RAG flow; the image and its source are linked in the repository.)
