Here are 64 public repositories matching this topic.
A simple web interface for running GGUF-format LLMs with llama-cpp-python (llama.cpp). (Python, updated May 12, 2024)
Solo connector core built on ctransformers; a chat Generative Pre-trained Transformer model selector. (Python, updated Feb 1, 2024)
A simple way to interact with LLaMA models in GGUF format. (Python, updated May 20, 2024)
(No description provided.) (Python, updated Feb 29, 2024)
(No description provided.) (Python, updated Feb 28, 2024)
Fine-tuning Mistral 7B on a custom dataset. (Python, updated Jan 31, 2024)
Convert and quantize Hugging Face models on premises using llama.cpp. (Jupyter Notebook, updated May 25, 2024)
Fine-tune LLaMA 3 on an Alpaca-instruct Gujarati dataset. (Jupyter Notebook, updated May 29, 2024)
Chat Generative Pre-trained Transformer. (Python, updated Jan 17, 2024)
A Genshin Impact question-answering project powered by Qwen1.5-14B-Chat. (Python, updated May 23, 2024)
Solo connector core built on llama.cpp. (Python, updated Apr 21, 2024)
Parse GGUF (an ML checkpoint file format) files and estimate their resource usage.
Cost-effective AI microservice system. (Python, updated Dec 23, 2023)
Go manage your Ollama models
Terminal UI for locally hosted large language models
Kaalaman AI is an open-source web app that uses the GPT-Generated Unified Format (GGUF). (TypeScript, updated Apr 4, 2024)
Everything you need to run RAG locally without OpenAI or any other paid service, on a completely open-source tech stack. (Python, updated Mar 22, 2024)
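Several of the repositories above read GGUF files directly. As a minimal sketch (not any listed project's code), the fixed 24-byte GGUF header, which per the public GGUF specification consists of the magic bytes `GGUF`, a uint32 version, a uint64 tensor count, and a uint64 metadata key-value count, all little-endian, can be parsed with Python's standard `struct` module:

```python
import struct

def parse_gguf_header(data: bytes) -> dict:
    """Parse the fixed 24-byte GGUF header.

    Layout (little-endian): 4-byte magic b'GGUF', uint32 version,
    uint64 tensor_count, uint64 metadata_kv_count.
    """
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError(f"not a GGUF file (magic={magic!r})")
    return {
        "version": version,
        "tensor_count": n_tensors,
        "metadata_kv_count": n_kv,
    }

# Synthetic header for illustration: version 3, 2 tensors, 5 metadata entries.
header = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
print(parse_gguf_header(header))
```

The metadata key-value pairs and tensor infos follow the header; a full parser (as in the "file parse/estimate" project above) would continue reading from offset 24.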