The primary objective of this repo is to explore setting up Llama 2 to run locally, together with LLM development frameworks and libraries, to provide a foundational runtime environment that can run on a laptop for further, more advanced development.
This repository is structured as a series of progressive learning steps and tutorials. Each step provides a simple starting point, or a baseline development environment, for further development with LLMs running locally. Below is an outline of the learning path:
- Overview: Get Meta Llama 2, quantize it, and set it up to run an LLM-style "hello world" with LangChain.
- Code Repo: README.md
- Resources:
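The "hello world" step above can be sketched as follows. This is a minimal, non-authoritative example assuming a quantized Llama 2 GGUF model has already been downloaded to `./models/llama-2-7b-chat.Q4_K_M.gguf` (a hypothetical path) and that the `llama-cpp-python` and `langchain-community` packages are installed:

```python
def build_llama2_prompt(user_message: str) -> str:
    """Wrap a user message in the Llama 2 chat prompt template."""
    system = "You are a helpful assistant."
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_message} [/INST]"

if __name__ == "__main__":
    # LangChain's wrapper around llama.cpp for running quantized models locally.
    from langchain_community.llms import LlamaCpp

    llm = LlamaCpp(
        model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical path
        n_ctx=2048,       # context window size
        temperature=0.2,  # keep answers fairly deterministic
    )
    print(llm.invoke(build_llama2_prompt("Say hello in one sentence.")))
```

The prompt template matters: Llama 2 chat models were fine-tuned on the `[INST]`/`<<SYS>>` format, so skipping it tends to degrade answer quality.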
- Overview: Build a simple web chat interface that interacts with an LLM running locally.
- Code Repo: README.md
- Resources:
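A web chat interface along these lines can be sketched with only the Python standard library. This is an illustrative skeleton, not the repo's actual implementation: `generate_reply` is a hypothetical placeholder where the call to the locally running LLM would go, and the endpoint shape (`POST` with a JSON `messages` list) is an assumption:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def format_chat_prompt(history: list) -> str:
    """Flatten a list of {"role", "content"} chat turns into one prompt."""
    lines = [f'{turn["role"]}: {turn["content"]}' for turn in history]
    lines.append("assistant:")
    return "\n".join(lines)

def generate_reply(prompt: str) -> str:
    # Placeholder: in a real app this would invoke the local LLM.
    return f"(echo) {prompt.splitlines()[-2]}"

class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON chat history sent by the browser.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        history = json.loads(body)["messages"]
        reply = generate_reply(format_chat_prompt(history))
        payload = json.dumps({"reply": reply}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Serve the chat endpoint on localhost only.
    HTTPServer(("127.0.0.1", 8000), ChatHandler).serve_forever()
```

Keeping the chat history client-side and re-sending it on each request keeps the server stateless, which simplifies local experimentation.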
- Overview: Build a simple web chat interface that interacts with an LLM using RAG (Retrieval-Augmented Generation), all running locally. RAG retrieves the most relevant content from the documents you specify and passes it as extra context so the LLM can answer the initial query.
- Code Repo: README.md
- Resources:
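The RAG flow described above can be sketched in a few lines. This toy version scores documents by naive word overlap instead of a real embedding-based vector search (which a production setup would use), purely to make the retrieve-then-augment shape concrete:

```python
def retrieve(query: str, documents: list, k: int = 1) -> list:
    """Rank documents by word overlap with the query (naive retrieval)."""
    q = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query: str, documents: list) -> str:
    """Stuff the top-ranked documents into the prompt as extra context."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
```

The resulting prompt string is then sent to the local LLM exactly as in the earlier steps; only the prompt construction changes, not the model setup.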