Unify Efficient Fine-Tuning of 100+ LLMs
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment.
Stable Diffusion web UI on Colab.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
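As a quick illustration of what PEFT provides, here is a minimal sketch of wrapping a Hugging Face causal LM with a LoRA adapter; the base model ("gpt2") and the hyperparameter values are placeholder assumptions, not recommendations from the project:

```python
# Minimal PEFT LoRA sketch: wrap a causal LM so that only the small
# low-rank adapter weights are trainable. Model choice and
# hyperparameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,                # rank of the low-rank update matrices
    lora_alpha=32,      # scaling factor applied to the update
    lora_dropout=0.05,  # dropout on the adapter path
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # prints the (tiny) trainable fraction
```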
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
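For comparison, a minimal sketch of the loralib pattern itself; the layer dimensions, rank, and file name are illustrative assumptions:

```python
# loralib sketch: swap a dense layer for its LoRA counterpart,
# freeze everything except the adapter, and save only the adapter.
import torch
import loralib as lora

model = torch.nn.Sequential(
    lora.Linear(768, 768, r=16),  # drop-in replacement for nn.Linear
)
lora.mark_only_lora_as_trainable(model)  # freezes all non-LoRA weights

# ... train as usual, then persist just the small adapter tensors:
torch.save(lora.lora_state_dict(model), "lora_adapter.pt")
```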
Fine-tune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory.
BELLE: Be Everyone's Large Language Model Engine (an open-source Chinese conversational LLM).
Using low-rank adaptation (LoRA) to quickly fine-tune diffusion models; a sketch of the underlying technique follows.
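The core idea behind these projects can be sketched in a few lines of plain PyTorch: the pretrained weight W stays frozen, and a trainable low-rank product B·A, scaled by alpha/r, is added on top, so only a small fraction of parameters is updated. Shapes and hyperparameters below are illustrative:

```python
# Plain-PyTorch sketch of the LoRA update: y = x @ (W + (alpha/r) * B @ A).T
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=4, alpha=16):
        super().__init__()
        # Frozen pretrained weight (random here for illustration).
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)
        # Trainable low-rank factors; B is zero-initialized so the
        # adapted layer starts out identical to the frozen one.
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return x @ (self.weight + self.scale * (self.B @ self.A)).T

layer = LoRALinear(768, 768)
out = layer(torch.randn(2, 768))  # gradients flow only into A and B
```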
Firefly: a training toolkit for large models, supporting Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models.
LoRA & DreamBooth training scripts & GUI, using kohya-ss's trainer, for diffusion models.
Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT.
MQTT gateway for ESP8266 or ESP32 with bidirectional 433MHz/315MHz/868MHz and infrared communication, BLE, Bluetooth, beacon detection, Mi Flora, MiJia, LYWSD02, LYWSD03MMC, Mi Scale, TPMS, and BBQ thermometer compatibility, plus LoRa.
33B Chinese LLM; DPO and QLoRA training; 100K context; AirLLM 70B inference on a single 4GB GPU.
Meshtastic device firmware
Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJEk6
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-tuning) for easy use, building a fine-tuning platform that makes it easy for researchers to get started with large models. We welcome open-source enthusiasts to open any meaningful PR on this repo and integrate as many LLM-related technologies as possible.