Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
🐢 Open-Source Evaluation & Testing for LLMs and ML models
An Easy-to-use Knowledge Editing Framework for LLMs.
An open-source Python toolbox for backdoor attacks and defenses.
Neural Network Verification Software Tool
[ICML 2024] TrustLLM: Trustworthiness in Large Language Models
A toolkit providing tools and techniques for the privacy and compliance of AI models.
🚀 A fast safe reinforcement learning library in PyTorch
Official code repo for the O'Reilly Book - Machine Learning for High-Risk Applications
[ICCV2021 Oral] Fooling LiDAR by Attacking GPS Trajectory
A project that adds scalable, state-of-the-art out-of-distribution detection (open set recognition) support by changing two lines of code. It performs detection efficiently (without increasing inference time) and with no classification accuracy drop, hyperparameter tuning, or additional data collection.
Framework for Adversarial Malware Evaluation.
Moonshot - A simple and modular tool to evaluate and red-team any LLM application.
[TPAMI, 2023] Fear-Neuro-Inspired Reinforcement Learning for Safe Autonomous Driving
A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible AI and Human-Centered AI.
Principal Image Sections Mapping. Convolutional Neural Network Visualisation and Explanation Framework
SyReNN: Symbolic Representations for Neural Networks
[ACM MM22] Towards Robust Video Object Segmentation with Adaptive Object Calibration
MERLIN is a global, model-agnostic, contrastive explainer for any tabular or text classifier. It provides contrastive explanations of how the behaviour of two machine learning models differs.