An easy-to-use Python framework to generate adversarial jailbreak prompts.
Restore safety in fine-tuned language models through task arithmetic
Official repository for the paper "ALERT: A Comprehensive Benchmark for Assessing Large Language Models’ Safety through Red Teaming"