
🌟 GPT-Protecter

🌟 Description

This project aims to provide a solution for quantitatively estimating the damage caused by "hallucinations" in large language models. It implements an insurance mechanism that affords redress for such damage, helping users address this phenomenon effectively.

💡 Features

  • Quantitative estimation of hallucination-induced damage
  • Compensation for hallucination-induced damage
  • Easy integration with large language models
  • User-friendly interface for managing insurance claims

🚀 Quick Start

Follow the steps below to get started.

Step 1: Call the interface (https://mock.apifox.com/m1/3993944-0-default/get_public_key) to obtain the public key.

Step 2: Encrypt your content with RSA using the public key, then keep one bit out of every two. Because only this sampled result is transmitted, no one can recover your real content, so your data remains secure.

Step 3: Use the result of Step 2 as the HTTP request body when calling the interface (https://mock.apifox.com/m1/3993944-0-default/purchase_insurance_for_content) to insure the content generated by GPT.

Step 4: Call the interface (https://mock.apifox.com/m1/3993944-0-default/claimant) to claim compensation.
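
Taken together, the four steps might look like the sketch below. The endpoint URLs are the ones listed above; everything else is an assumption for illustration: that the key is returned as PEM text, that OAEP padding is acceptable, that the fingerprint is sent as a plain-text body, and that the purchase response contains a `policy_id` field.

```python
# Sketch of the four Quick Start steps. Endpoint URLs are from this README;
# the payload shapes, PEM key format, OAEP padding, and "policy_id" field
# are assumptions, not a documented contract.
import requests
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

BASE = "https://mock.apifox.com/m1/3993944-0-default"

# Step 1: obtain the public key (assumed to be returned as PEM text).
pem = requests.get(f"{BASE}/get_public_key").text
public_key = serialization.load_pem_public_key(pem.encode())

# Step 2: RSA-encrypt the content, then keep one bit out of every two.
content = b"text generated by GPT"
ciphertext = public_key.encrypt(
    content,
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)
bits = "".join(f"{byte:08b}" for byte in ciphertext)
fingerprint = bits[::2]  # every other bit; the original content is unrecoverable

# Step 3: send the fingerprint as the HTTP request body to insure the content.
purchase = requests.post(f"{BASE}/purchase_insurance_for_content", data=fingerprint)

# Step 4: file a claim for compensation ("policy_id" is a hypothetical field).
claim = requests.post(f"{BASE}/claimant", json={"policy_id": purchase.json().get("policy_id")})
print(claim.status_code, claim.text)
```

Sampling every other bit discards half of the ciphertext, which is why the original content cannot be reconstructed from what is transmitted.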

💡 How Does the Insurance Mechanism Work?

Our insurance mechanism is designed to identify and measure the impact of hallucinations in language models: it analyzes model output, detects hallucinations, and estimates the potential damage they cause.
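
As a purely hypothetical illustration of that pipeline (none of these names, scores, or formulas come from the project), a damage estimator might expose an interface like this:

```python
# Hypothetical sketch only: a toy detect-then-estimate pipeline matching the
# description above. The class, function, and scoring formula are invented.
from dataclasses import dataclass

@dataclass
class Assessment:
    hallucination_score: float  # 0.0 (fully grounded) .. 1.0 (fully fabricated)
    estimated_damage: float     # payout estimate in arbitrary currency units

def assess(model_output: str, reference_facts: list[str]) -> Assessment:
    # Toy detector: flag sentences supported by none of the reference facts.
    claims = [s.strip() for s in model_output.split(".") if s.strip()]
    unsupported = [c for c in claims if not any(f in c for f in reference_facts)]
    score = len(unsupported) / len(claims) if claims else 0.0
    # Toy damage model: damage scales with the share of unsupported claims.
    return Assessment(hallucination_score=score, estimated_damage=100.0 * score)

print(assess("Paris is in France. The moon is made of cheese.", ["Paris is in France"]))
```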

🤝 Why Contribute to this Project?

By contributing to this project, you can help improve the reliability of large language models and make a positive impact in addressing the hallucination phenomenon. Your contributions can help develop more effective insurance mechanisms and compensation systems.

License

This project is licensed under the MIT License. Please see the license file for more information.

Contact Information

For any questions or feedback, please contact us at GPT-Protecter@outlook.com.
