Comparison of different adaptation methods on PEFT for fine-tuning downstream tasks or benchmarks.
My personal notes, code and projects of the Udacity Generative AI Nanodegree.
Natural Language Processing Class Project - Spring '23. Analysing and Generating Sports Fans Responses from Reddit Sport Subreddits
Streamlit application for Reddit posts powered by OpenAI, Pinecone and Langchain
A payload compression toolkit that makes it easy to create ideal data structures for LLMs; from training data to chain payloads.
Fine-tune large language models (LLMs) using the Hugging Face Transformers library.
High-efficiency text and file scraper with smart tracking and client/server networking for building language model datasets fast.
Gemma-2b-it fine-tuned on a dataset of Python code, enabling it to learn Python syntax and assist with debugging tasks, offering valuable guidance to programmers.
A collection of examples for training or fine-tuning LLMs.
Collection of resources for finetuning Large Language Models (LLMs).
Finetune an LLM to generate SQL from text on Intel GPUs (XPUs) using QLoRA
Factuality check of the SemRep Predications
Enter the realm of truth detection with GPT-Truth: fine-tuning GPT-3.5 for unparalleled accuracy in identifying deceptive opinions.
This is a final project repository for Georgia Tech CS7643.
A winner of the NeurIPS LLM 2023 Competition.
This is a package for generating questions and answers from unstructured data to be used for NLP tasks.
Finetuning Some Wizard Models With QLoRA
npm like package ecosystem for Prompts 🤖
We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20 via OpenAI’s APIs.
Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji.