Finetuning Some Wizard Models With QLoRA
Updated Sep 17, 2023 - Python
A Streamlit application for Reddit posts, powered by OpenAI, Pinecone, and LangChain
A collection of examples for training or fine-tuning LLMs.
A winner of the NeurIPS LLM 2023 Competition
Finetune an LLM to generate SQL from text on Intel GPUs (XPUs) using QLoRA
Natural Language Processing Class Project - Spring '23. Analyzing and Generating Sports Fans' Responses from Reddit Sports Subreddits
Factuality check of the SemRep Predications
A package for generating questions and answers from unstructured data, for use in NLP tasks.
A comparison of different PEFT adaptation methods for fine-tuning on downstream tasks and benchmarks.
We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20 via OpenAI’s APIs.
A payload compression toolkit that makes it easy to create ideal data structures for LLMs, from training data to chain payloads.
My personal notes, code and projects of the Udacity Generative AI Nanodegree.
Fine-tune large language models (LLMs) using the Hugging Face Transformers library.
A high-efficiency text and file scraper with smart tracking and client/server networking, for quickly building language-model datasets
The Gemma-2b-it LLM has been fine-tuned on a dataset of Python code, enabling it to learn Python syntax and assist programmers with debugging tasks.
Enter the realm of truth detection with GPT-Truth: fine-tuning GPT-3.5 for unparalleled accuracy in identifying deceptive opinions
Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji.
This is a final project repository for Georgia Tech CS7643.
npm like package ecosystem for Prompts 🤖
Collecting data for building Lucknow's first LLM
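Several of the repositories above fine-tune with LoRA or QLoRA. As background, here is a minimal pure-Python sketch of the low-rank update at the heart of LoRA: instead of updating a full weight matrix W, training learns two small matrices A (r x k) and B (d x r) and applies W' = W + (alpha / r) * B @ A. The matrix sizes and values below are toy examples, not taken from any listed repo.

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_update(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A), the LoRA-adapted weight matrix."""
    delta = matmul(B, A)          # low-rank update, shape d x k
    scale = alpha / r             # standard LoRA scaling factor
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Toy 2x2 base weight with a rank r = 1 adapter (d = k = 2).
W = [[1.0, 0.0],
     [0.0, 1.0]]
B = [[1.0],       # d x r = 2 x 1
     [2.0]]
A = [[0.5, 0.5]]  # r x k = 1 x 2

W_adapted = lora_update(W, A, B, alpha=2, r=1)
print(W_adapted)  # [[2.0, 1.0], [2.0, 3.0]]
```

Because only A and B are trained (r much smaller than d and k), the number of trainable parameters drops from d*k to r*(d + k); QLoRA additionally keeps the frozen base weights in 4-bit quantized form.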