Repository for Chat LLaMA - training a LoRA for the LLaMA (1 or 2) models on HuggingFace with 8-bit or 4-bit quantization. Research only.
[SIGIR'24] The official implementation of MOELoRA.
LoRA: Low-Rank Adaptation of Large Language Models, implemented in PyTorch (a minimal sketch of the technique follows this list).
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection (see the gradient-projection sketch below).
Fine-tuning Mistral-7B with PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation) on the Puffin dataset (multi-turn conversations between GPT-4 and real humans).
A simple, clean implementation of different LoRA methods for training and fine-tuning Transformer-based models (e.g., BERT, GPT). [Research purposes only]
Lab work for the Coursera course "Generative AI with Large Language Models".
An AI-driven tool for generating unique video game characters using Stable Diffusion, DreamBooth, and LoRA adapters, producing customizable, high-quality character designs tailored to game developers and artists.
Stability AI's SD-Turbo model fine-tuned with LoRA on Magic: The Gathering artwork.
Low-Rank Approximation (Adaptation) Methods in Neural Networks.
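
As background for the LoRA implementations listed above: LoRA freezes the pretrained weight matrix W0 and trains only a low-rank update ΔW = BA, where A and B have a small inner rank r. Below is a minimal, self-contained PyTorch sketch of a LoRA-wrapped linear layer; the class name `LoRALinear` and the hyperparameters (rank 8, alpha 16) are illustrative choices, not code from any repository on this page.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base linear layer plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the adapter matrices are trained
        # A: small random init; B: zeros, so the adapter starts as a no-op
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W0 x + (alpha / r) * B A x
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(2, 768))  # shape: (2, 768)
```

Because B starts at zero, the wrapped layer initially behaves exactly like the frozen base layer, and only the (768 x 8 + 8 x 768) adapter parameters receive gradients.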
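
GaLore, by contrast, adds no adapter parameters: it projects the full-rank gradient into a low-rank subspace so that optimizer state is stored at reduced size. The toy sketch below is a simplification under stated assumptions: it computes a one-off SVD projector and applies a plain SGD step, whereas the paper refreshes the projector periodically and keeps Adam statistics in the low-rank space.

```python
import torch

def galore_projector(grad: torch.Tensor, rank: int) -> torch.Tensor:
    """Build a projector from the top-r left singular vectors of the gradient."""
    U, S, Vh = torch.linalg.svd(grad, full_matrices=False)
    return U[:, :rank]  # shape: (m, r)

# Toy example on a single (256 x 128) weight matrix
W = torch.randn(256, 128, requires_grad=True)
lr, rank = 1e-2, 4

loss = (W @ torch.randn(128, 8)).pow(2).mean()
loss.backward()

P = galore_projector(W.grad, rank)   # refreshed every T steps in the paper
low_rank_grad = P.T @ W.grad         # (r, n): optimizer state would live here
update = P @ low_rank_grad           # project back to the full weight shape
with torch.no_grad():
    W -= lr * update
```

The memory saving comes from the optimizer state: moment estimates of shape (r, n) replace the full (m, n) tensors that Adam would otherwise keep per weight matrix.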