Loki: Open-source solution designed to automate the process of verifying factuality
Awesome-LLM-Robustness: a curated list of papers on Uncertainty, Reliability, and Robustness in Large Language Models
✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs.
[ACL 2024] Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation
[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by Large Language Models.
😎 An up-to-date, curated list of awesome LMM hallucination papers, methods, and resources.
[CVPR 2018] Face Super-resolution with Supplementary Attributes
Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations"
Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without a custom rubric or reference answer, in absolute or relative grading mode, and more. It also lists available tools, methods, repos, and code for hallucination detection, LLM evaluation, and grading.
[IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection
TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space
This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debiasing method and a Visual Debias Decoding strategy.
"Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A Case Study on Domain-Specific Queries in Private Knowledge-Bases" by Jiarui Li and Ye Yuan and Zehua Zhang
[CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
[ACL 2024] An Easy-to-use Hallucination Detection Framework for LLMs.
Knowledge Verification to Nip Hallucination in the Bud
An explainable sentence similarity measurement (see the similarity sketch after this list).
[NLPCC 2024] Shared Task 10: Regulating Large Language Models
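
The RAG entry above describes grounding answers in a private knowledge base so the model cannot invent unsupported facts. Below is a minimal, hypothetical sketch of that pattern, not the paper's actual code: knowledge_base, embed, retrieve, and call_llm are all placeholder names, and the bag-of-words retrieval stands in for a real dense retriever.

```python
from collections import Counter
import math

# Hypothetical in-memory knowledge base; a real system would index documents.
knowledge_base = [
    "The warranty period for model X is 24 months from the date of purchase.",
    "Support tickets are answered within two business days.",
    "Model X ships with a 65 W USB-C power adapter.",
]

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; swap in a dense encoder in practice.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank passages by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(knowledge_base, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM client call (hypothetical).
    return "[model answers using only the prompt below]\n" + prompt

def answer(query: str) -> str:
    # Constrain the model to the retrieved context to counter hallucination.
    context = "\n".join(retrieve(query))
    prompt = (
        "Answer ONLY from the context below; otherwise say 'not in the knowledge base'.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

print(answer("How long is the warranty for model X?"))
```

The design point is that retrieval narrows the model's evidence before generation, so unsupported claims can be refused rather than invented.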
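For the explainable sentence similarity entry, here is a hedged illustration of the general idea only: a similarity score returned together with each shared token's contribution as the "explanation". The repo's actual method is not reproduced, and all names below are made up for the example.

```python
from collections import Counter
import math

def similarity_with_explanation(s1: str, s2: str):
    # Toy lexical similarity; a real measure would use learned embeddings.
    a, b = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    score = dot / (na * nb) if na and nb else 0.0
    # "Explanation": each shared token's share of the unnormalized overlap.
    contributions = {t: a[t] * b[t] / dot for t in a if t in b} if dot else {}
    return score, contributions

score, why = similarity_with_explanation(
    "the model hallucinated a citation",
    "the model invented a citation",
)
print(f"similarity = {score:.2f}")
for token, share in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {token}: {share:.0%} of the overlap")
```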