Low-code framework for building custom LLMs, neural networks, and other AI models
This project shares technical principles and hands-on experience related to large language models.
SkyPilot: Run LLMs, AI, and Batch jobs on any cloud. Get maximum savings, highest GPU availability, and managed execution—all with a simple interface.
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
Code examples and resources for DBRX, a large language model developed by Databricks
An efficient, flexible and full-featured toolkit for fine-tuning large models (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
DLRover: An Automatic Distributed Deep Learning System
irresponsible innovation. Try now at https://chat.dev/
The official repo of Aquila2 series proposed by BAAI, including pretrained & chat large language models.
Open Source LLM toolkit to build trustworthy LLM applications. TigerArmor (AI safety), TigerRAG (embedding, RAG), TigerTune (fine-tuning)
LLM (Large Language Model) Fine-Tuning
LLM-PowerHouse: Unleash LLMs' potential through curated tutorials, best practices, and ready-to-use code for custom training and inferencing.
Tune LLMs in a few lines of code
Fine-tune LLMs on Kubernetes (K8s) using Runbooks
SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework.
Collection of best practices, reference architectures, model training examples and utilities to train large models on AWS.
Run GPU inference and training jobs on serverless infrastructure that scales with you.
Sequence Parallel Attention for Long Context LLM Model Training and Inference
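Several of the fine-tuning toolkits listed above (XTuner, H2O LLM Studio, TigerTune, SiLLM) build on parameter-efficient methods such as LoRA. A minimal NumPy sketch of the core LoRA idea follows; it is an illustration only, not code from any of these projects, and all sizes and names are hypothetical:

```python
import numpy as np

# Illustrative LoRA sketch: instead of updating a frozen weight matrix W,
# train a low-rank correction B @ A, scaled by alpha / r.
rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 64, 64, 8, 16        # hypothetical layer sizes and rank
W = rng.normal(size=(d_out, d_in))           # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01        # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection, zero-initialised

def lora_forward(x: np.ndarray) -> np.ndarray:
    """Forward pass: frozen path plus low-rank adapter path."""
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)

x = rng.normal(size=(4, d_in))
# With B zero-initialised, the adapter contributes nothing before training,
# so the adapted layer initially matches the frozen layer exactly.
assert np.allclose(lora_forward(x), x @ W.T)
```

Because only A and B (2 * r * d values) are trained while W stays frozen, the number of trainable parameters drops by orders of magnitude, which is what makes single-GPU and Apple Silicon fine-tuning practical.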