A common setup for running an LLM locally: llama-cpp to quantize the model, LangChain to set up the model, prompts, and RAG, and Gradio for the UI.

skywing/llm-dev

Learning Large Language Model (LLM) frameworks and APIs that run locally

The primary objective of this repo is to explore setting up Llama 2 to run locally, along with LLM development frameworks and libraries, providing a foundational runtime environment that runs on a laptop for further, more advanced development.
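As a rough illustration of how the pieces fit together, the sketch below formats a Llama-2-chat prompt, loads a local GGUF model through LangChain's `LlamaCpp` wrapper, and serves it with a minimal Gradio text UI. The model path, sampling parameters, and helper names are illustrative assumptions, not files or settings from this repo — adjust them to your own setup.

```python
import os

# Assumed path to a quantized Llama 2 model produced with llama-cpp tooling.
MODEL_PATH = "models/llama-2-7b-chat.Q4_K_M.gguf"

def llama2_chat_prompt(system: str, user: str) -> str:
    """Wrap a system and user message in the Llama 2 chat template."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

# Only wire up the model and UI when the quantized file is actually present,
# so the prompt helper above remains usable without the heavy dependencies.
if os.path.exists(MODEL_PATH):
    from langchain_community.llms import LlamaCpp
    import gradio as gr

    llm = LlamaCpp(
        model_path=MODEL_PATH,
        n_ctx=2048,       # context window size
        temperature=0.7,
        max_tokens=256,
    )

    def answer(user_msg: str) -> str:
        # LangChain LLMs are text-in/text-out: invoke() returns a string.
        prompt = llama2_chat_prompt("You are a helpful assistant.", user_msg)
        return llm.invoke(prompt)

    gr.Interface(fn=answer, inputs="text", outputs="text",
                 title="Local Llama 2").launch()
```

Keeping the prompt-template helper separate from the model wiring makes it easy to swap in a RAG chain later without touching the UI code.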

Learning Steps and Tutorials

This repository is structured as a series of progressive learning steps and tutorials. Each step provides a simple getting-started approach or a baseline development environment setup for further development with LLMs running locally. Below is an outline of the learning path:
