Monolingual or Multilingual Instruction Tuning: Which Makes a Better Alpaca
M3DBench introduces a comprehensive 3D instruction-following dataset with support for interleaved multi-modal prompts. Furthermore, M3DBench provides a new benchmark to assess large models across 3D vision-centric tasks.
Implementation of the models from the Universal-NER paper (2024) as a Streamlit-based web application designed to process PDF documents for Named Entity Recognition tasks. It allows users to upload PDF files, from which the application extracts text, images, and tables to identify entities of a user-specified entity type.
EasyRLHF aims to provide an easy and minimal interface for training aligned language models, using off-the-shelf solutions and datasets.
Evaluating Large Language Models with Instructions and Prompts
KoTox is an automatically generated instruction dataset in Korean. The instruction set is used to mitigate the toxicity of LLMs.
An instruction-tuning dataset generation script.
Chinese Grammar Error and Spelling Error Correction System.
Kosy🍵llama: easily apply the Random Noisy Embeddings with fine-tuning method to Korean LLMs.
Instruction and training dataset generation using Mistral 7B with context from document chunks
A multimodal model for language-guided socially compliant robot navigation.
Vision Large Language Models trained on M3IT instruction tuning dataset
End-to-end MLOps LLM instruction finetuning based on PEFT & QLoRA to solve math problems.
Awesome Instruction Editing. Image and Media Editing with Human Instructions. Instruction-Guided Image and Media Editing.
The official implementation of paper "Demystifying Instruction Mixing for Fine-tuning Large Language Models"
Baseline: google/flan-t5; finetuning: LMQG, LoRA.
Discourse forum chat data crawling with on-the-fly parsing for direct use in (multimodal) LLM instruction finetuning. Data includes text, images, and links.
"RecRanker: Instruction Tuning Large Language Model as Ranker for Top-k Recommendation"