
local_gpt

Ready-to-deploy offline LLM AI web chat.

Usage

Instructions for installing the required software and configuring the app.

  1. Install Torch from the official site; for better performance, use the CUDA build if you have the required hardware (see the example command below).

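  For example, on a machine set up for CUDA 12.1 the GPU build can typically be installed with pip; check the official PyTorch install selector for the exact command for your OS and CUDA version, since the index URL below is only an example:

    pip install torch --index-url https://download.pytorch.org/whl/cu121
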
  2. Clone this repository:

    git clone --recurse-submodules https://github.com/ubertidavide/local_gpt.git
  3. Install all Python dependencies using pip:
    pip install -r requirements.txt
  4. Insert your preferred prompt at line 24 of main.py (a hypothetical sketch is shown below); you can find some awesome prompts here.

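  A minimal, hypothetical sketch of that edit (the variable name and prompt text are assumptions, not the actual contents of main.py):

    # Hypothetical illustration: the real variable name and line in main.py may differ.
    prompt = "You are a helpful assistant. Answer clearly and concisely."
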
  5. Deploy the app locally:

    streamlit run main.py
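
  Streamlit serves the app at http://localhost:8501 by default; open that address in your browser to start chatting.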