👮 Simulate various public and private security scenarios.
-
Updated
May 12, 2024
👮 Simulate various public and private security scenarios.
MSc Dissertation: Ensemble neural network for static malware classification using multiple representations
IDVoice + ChatGPT Android demo app
CLI tool that uses the Lakera API to perform security checks in LLM inputs
AntiNex python client for training and using pre-trained deep neural networks with JWT authentication
Prompt Engineering Tool for AI Models with cli prompt or api usage
Building Private Healthcare AI Assistant for Clinics Using Qdrant Hybrid Cloud, DSPy and Groq - Llama3
Python SDK for IvyCheck
A centralized resource for technical professionals looking to establish a strategy for implementing security and responsible AI practices on Azure
GeminiHacker is a Python script designed to harness the power of a generative AI model for security research, bug bounty hunting, and vulnerability scanning. This README.md file provides detailed instructions on how to install, configure, and use the script effectively.
Official code for paper: Z. Zhang, X. Wang, J. Huang and S. Zhang, "Analysis and Utilization of Hidden Information in Model Inversion Attacks," in IEEE Transactions on Information Forensics and Security, doi: 10.1109/TIFS.2023.3295942
IDVoice + ChatGPT iOS demo app
Datasets for training deep neural networks to defend software applications
Cyber-Security Bible! Theory and Tools, Kali Linux, Penetration Testing, Bug Bounty, CTFs, Malware Analysis, Cryptography, Secure Programming, Web App Security, Cloud Security, Devsecops, Ethical Hacking, Social Engineering, Privacy, Incident Response, Threat Assestment, Personal Security, Ai Security, Android Security, Iot Security, Standards.
Official Implementation of IEEE TIFS paper Odyssey: Creation, Analysis and Detection of Trojan Models
AI/LLM Prompt Injection List is a curated collection of prompts designed for testing AI or Large Language Models (LLMs) for prompt injection vulnerabilities. This list aims to provide a comprehensive set of prompts that can be used to evaluate the behavior of AI or LLM systems when exposed to different types of inputs.
MINOTAUR: The STRONGEST Secure Prompt EVER! Prompt Security Challenge, Impossible GPT Security, Prompts Cybersecurity, Prompting Vulnerabilities, FlowGPT, Secure Prompting, Secure LLMs, Prompt Hacker, Cutting-edge Ai Security, Unbreakable GPT Agent, Anti GPT Leak, System Prompt Security.
Neural networks, but malefic! 😈
Add a description, image, and links to the ai-security topic page so that developers can more easily learn about it.
To associate your repository with the ai-security topic, visit your repo's landing page and select "manage topics."