visuallama

A simple, self-hosted web interface for running llama.cpp / Alpaca models locally, built using:

  • MongoDB
  • Express.js
  • React
  • Node.js
  • llama.cpp

This is a work in progress and is being continually updated! I am not an expert in the MERN stack and web dev, so code and UX design may not always be of the highest quality!
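
To give a rough idea of how the pieces fit together, here is a minimal, hypothetical sketch of an Express.js route that shells out to a locally compiled llama.cpp binary and returns its output to the frontend. The binary path, model path, port, and flags are assumptions for illustration, not the project's actual code.

```js
// Hypothetical sketch only: an Express route that runs a prompt through a
// local llama.cpp binary. Paths and flags are illustrative assumptions.
const express = require("express");
const { spawn } = require("child_process");

const app = express();
app.use(express.json());

app.post("/api/chat", (req, res) => {
  const prompt = req.body.prompt || "";

  // Assumes a compiled llama.cpp "main" binary and a 4-bit quantized model;
  // adjust both paths to match your local setup.
  const llama = spawn("./llama.cpp/main", [
    "-m", "./models/ggml-model-q4_0.bin",
    "-p", prompt,
    "-n", "256", // max tokens to generate
  ]);

  let output = "";
  llama.stdout.on("data", (chunk) => { output += chunk; });
  llama.on("close", () => res.json({ response: output }));
  llama.on("error", (err) => res.status(500).json({ error: err.message }));
});

app.listen(3001, () => console.log("Backend listening on port 3001"));
```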

Upcoming features

  • Unit tests
  • Might change the backend from Express.js to Flask (this is my first time using JavaScript and I'm more comfortable with Python)
  • Easier setup and installation (paths to models and settings are currently hardcoded)
  • Configuring model parameters like prompts, temperature, etc. from the web UI
  • Storing model settings, chat history, logs, etc. in MongoDB (see the sketch after this list)
  • Cleaner UI
  • Support for multiple users
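
As a rough sketch of what the planned MongoDB storage could look like, here are hypothetical Mongoose schemas for model settings and chat history. The field names and defaults are assumptions for illustration, not the project's actual data model.

```js
// Hypothetical sketch only: Mongoose schemas for the planned storage of
// model settings and chat history. Field names are assumptions.
const mongoose = require("mongoose");

const modelSettingsSchema = new mongoose.Schema({
  modelPath: String,                              // path to the quantized model file
  temperature: { type: Number, default: 0.8 },    // sampling temperature
  maxTokens: { type: Number, default: 256 },      // max tokens per response
  systemPrompt: String,                           // prompt prepended to each chat
});

const chatMessageSchema = new mongoose.Schema({
  role: { type: String, enum: ["user", "assistant"] },
  content: String,
  createdAt: { type: Date, default: Date.now },
});

module.exports = {
  ModelSettings: mongoose.model("ModelSettings", modelSettingsSchema),
  ChatMessage: mongoose.model("ChatMessage", chatMessageSchema),
};
```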

Example

example.mp4

Tiny usage example (sped up slightly because my CPU is pretty slow, even when running the smallest LLaMA model with 4-bit quantization)
