Tools for easing the handoff between AI/ML and App/SRE teams. (Go; updated May 31, 2024)
Local web UI for large language models. Supports the GGUF format; runs LLM inference with support for STT/TTS and function calling.
Drop-in, local AI alternative to the OpenAI stack. Multi-engine (llama.cpp, TensorRT-LLM). Powers 👋 Jan
Gradio-based tool to run open-source LLMs directly from Hugging Face.
PowerShell automation to download large language models (LLMs) from Git repositories and quantize them with llama.cpp into the GGUF format.
Search for anything using Google, DuckDuckGo, or phind.com. Also includes AI models, YouTube video transcription, temporary email and phone number generation, TTS support, and web AI (terminal GPT and Open Interpreter).
Maid is a cross-platform Flutter app for interfacing with GGUF / llama.cpp models locally, and with Ollama and OpenAI models remotely.
Local character AI chatbot with Chroma vector-store memory, plus scripts for processing documents into Chroma.
Convert & quantize Hugging Face models using llama.cpp on-premises.
Your go-to tool for easily creating quantized versions of Hugging Face models in the GGUF format.
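Several of the tools above automate the same underlying llama.cpp workflow: convert a Hugging Face checkpoint to GGUF, then quantize it. A minimal sketch of that flow is below, assuming llama.cpp is cloned locally; the model directory, output filenames, and the Q4_K_M quantization level are illustrative choices, not requirements of any specific tool listed here.

```shell
# Get llama.cpp and its Python conversion dependencies
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# 1. Convert a local Hugging Face checkpoint (./my-hf-model is a placeholder)
#    to a full-precision GGUF file.
python llama.cpp/convert_hf_to_gguf.py ./my-hf-model \
    --outfile my-model-f16.gguf --outtype f16

# 2. Quantize the GGUF file to a smaller format; Q4_K_M is a common
#    balance of size and quality (build llama.cpp first to get the binary).
llama.cpp/llama-quantize my-model-f16.gguf my-model-Q4_K_M.gguf Q4_K_M
```

The resulting `.gguf` file can then be loaded by any of the GGUF-compatible runtimes listed on this page.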
A Genshin Impact question-answering project powered by Qwen1.5-14B-Chat.