The Security Toolkit for LLM Interactions
-
Updated
May 9, 2024 - Python
The Security Toolkit for LLM Interactions
Agentic LLM Vulnerability Scanner
AiShields is an open-source Artificial Intelligence Data Input and Output Sanitizer
The fastest && easiest LLM security and privacy guardrails for GenAI apps.
LLM App templates for RAG, knowledge mining, and stream analytics. Ready to run with Docker,⚡in sync with your data sources.
Risks and targets for assessing LLMs & LLM vulnerabilities
A benchmark for prompt injection detection systems.
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
🐢 Open-Source Evaluation & Testing framework for LLMs and ML models
Papers and resources related to the security and privacy of LLMs 🤖
SecGPT: An execution isolation architecture for LLM-based systems
An easy-to-use Python framework to generate adversarial jailbreak prompts.
Whispers in the Machine: Confidentiality in LLM-integrated Systems
AI-driven Threat modeling-as-a-Code (TaaC-AI)
Guard your LangChain applications against prompt injection with Lakera ChainGuard.
This repository contains various attack against Large Language Models.
LLM security and privacy
A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.
Example of running last_layer with FastAPI on vercel
It is a comprehensive resource hub compiling all LLM papers accepted at the International Conference on Learning Representations (ICLR) in 2024.
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."