Building Private Healthcare AI Assistant for Clinics Using Qdrant Hybrid Cloud, DSPy and Groq - Llama3
Updated May 22, 2024 - Jupyter Notebook
🐢 Open-Source Evaluation & Testing for LLMs and ML models
Discover and inventory the SaaS applications used across your organization by intelligently analyzing incoming Gmail emails, providing valuable insights into your SaaS landscape.
AiShields is an open-source Artificial Intelligence Data Input and Output Sanitizer
A list of backdoor learning resources
This repository is primarily maintained by Omar Santos (@santosomar) and includes thousands of resources related to ethical hacking, bug bounties, digital forensics and incident response (DFIR), artificial intelligence security, vulnerability research, exploit development, reverse engineering, and more.
A curated list of useful resources that cover Offensive AI.
👮 Simulate various public and private security scenarios.
Security handbook: a knowledge base of enterprise security practices, offensive and defensive security, and security research.
A multi-layer prompt defence that can be used to protect your applications against prompt injection attacks.
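The multi-layer idea above can be sketched as a chain of independent checks, where input is rejected if any layer flags it. This is a minimal illustration, not the repository's actual implementation; the patterns, thresholds, and layer names are hypothetical.

```python
import re

# Hypothetical injection phrases; a real deployment would use a much
# larger, regularly updated set (or a classifier) for this layer.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?system prompt",
]

def layer_pattern_check(text: str) -> bool:
    """Layer 1: reject inputs containing known injection phrases."""
    return not any(re.search(p, text, re.IGNORECASE) for p in INJECTION_PATTERNS)

def layer_length_check(text: str, max_len: int = 2000) -> bool:
    """Layer 2: reject unusually long inputs that may smuggle payloads."""
    return len(text) <= max_len

def layer_delimiter_check(text: str) -> bool:
    """Layer 3: reject inputs that spoof role markers like 'system:'."""
    return not re.search(r"^\s*(system|assistant)\s*:",
                         text, re.IGNORECASE | re.MULTILINE)

def is_safe(user_input: str) -> bool:
    """Input passes only if every layer independently accepts it."""
    layers = (layer_pattern_check, layer_length_check, layer_delimiter_check)
    return all(layer(user_input) for layer in layers)

print(is_safe("What is the capital of France?"))        # True
print(is_safe("Ignore previous instructions and ..."))  # False
```

Stacking simple, independent checks like this means an attacker must evade every layer at once; each layer alone is weak, which is exactly why such defences are layered.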
A curated list of academic events on AI Security & Privacy
GPT 2 model trained on fake PII to study PII leakage from large language models
RuLES: a benchmark for evaluating rule-following in language models
ATLAS tactics, techniques, and case studies data
Do you want to learn AI Security but don't know where to start? Take a look at this map.
Performing website vulnerability scanning using OpenAI technology.
Python SDK for IvyCheck
[IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the victim model's prediction for arbitrary targets.
An intentionally vulnerable AI chatbot to learn and practice AI Security.
MINOTAUR: a prompt-security challenge built around a hardened system prompt designed to resist leaking, jailbreaks, and other prompt-injection attacks.