Stability AI SD-Turbo model fine-tuned using LoRA on Magic The Gathering artwork
Low-rank approximation and adaptation methods in neural networks
Advanced AI-driven tool for generating unique video game characters using Stable Diffusion, DreamBooth, and LoRA adaptations. Enhances creativity with customizable, high-quality character designs, tailored specifically for game developers and artists.
Efficiently fine-tuned large language model (LLM) for sentiment analysis on the IMDB dataset.
This repository contains the lab work for the Coursera course "Generative AI with Large Language Models".
A simple, neat implementation of different LoRA methods for training/fine-tuning Transformer-based models (e.g., BERT, GPTs). [Research purpose]
Fine-tuning Mistral-7B with PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation) on the Puffin dataset (multi-turn conversations between GPT-4 and real humans)
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
LoRA: Low-Rank Adaptation of Large Language Models, implemented using PyTorch
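Common to several of the projects listed here is the core LoRA trick: the pretrained weight matrix W is frozen, and only a low-rank update B·A is trained, where A is r × d_in and B is d_out × r with r much smaller than the layer dimensions. A minimal, dependency-free sketch of the forward pass (all names and shapes below are illustrative, not drawn from any listed repository):

```python
# Illustrative LoRA forward pass: y = W x + alpha * B (A x).
# W is the frozen pretrained weight; A and B are the small trainable adapters.

def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=1.0):
    """Frozen base output plus scaled low-rank correction."""
    base = matvec(W, x)                     # W x  (no gradients needed here)
    low_rank = matvec(B, matvec(A, x))      # B (A x), rank-r update
    return [b + alpha * l for b, l in zip(base, low_rank)]

# Example with d_in = d_out = 2 and rank r = 1:
W = [[1.0, 0.0],
     [0.0, 1.0]]       # frozen pretrained weights (identity for clarity)
A = [[1.0, 1.0]]       # trainable down-projection, shape (r, d_in)
B = [[0.5],
     [0.5]]            # trainable up-projection, shape (d_out, r)

print(lora_forward(W, A, B, [2.0, 4.0]))   # → [5.0, 7.0]
```

Because only A and B receive gradients, the number of trainable parameters drops from d_out·d_in to r·(d_in + d_out), which is what makes the 8-bit and 4-bit fine-tuning setups in these repositories feasible on a single GPU.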
[SIGIR'24] The official implementation code of MOELoRA.
Repository for Chat LLaMA: training a LoRA for the LLaMA (1 or 2) models on HuggingFace with 8-bit or 4-bit quantization. Research only.