Test tool to simulate two types of poisoning attacks on an AI model
Updated May 5, 2024 - Python
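As a concrete illustration of the kind of data poisoning such test tools simulate, here is a minimal, self-contained sketch of a label-flipping attack against a toy nearest-centroid classifier. All data, class layouts, and the flip budget are illustrative assumptions, not taken from the repository above:

```python
import numpy as np

# Minimal sketch of a label-flipping poisoning attack against a toy
# nearest-centroid classifier. All data and numbers are illustrative.

rng = np.random.default_rng(0)

# Two well-separated Gaussian classes.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def train_centroids(X, y):
    """Fit a nearest-centroid model: one mean vector per class."""
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, X, y):
    """Classify each point by its nearest centroid and score it."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((d.argmin(axis=1) == y).mean())

clean_acc = accuracy(train_centroids(X, y), X, y)

# Attack: flip the labels of 40 class-0 training points to class 1,
# dragging the learned class-1 centroid toward the class-0 cluster.
y_flipped = y.copy()
flip_idx = rng.choice(np.where(y == 0)[0], size=40, replace=False)
y_flipped[flip_idx] = 1

poisoned_centroids = train_centroids(X, y_flipped)
poisoned_acc = accuracy(poisoned_centroids, X, y)
print("clean:", clean_acc, "poisoned:", poisoned_acc)
```

The poisoned class-1 centroid is pulled toward the origin, distorting the decision boundary; real tools apply the same idea to deep models, where the effect on accuracy is typically far more pronounced.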
FedAnil+ is a novel, lightweight, and secure federated deep learning model that addresses non-IID data, privacy concerns, and communication overhead. This repo hosts a Python simulation of FedAnil+.
FedAnil is a secure, blockchain-enabled federated deep learning model that addresses non-IID data and privacy concerns. This repo hosts a Python simulation of FedAnil.
TensorFlow implementation of TrialAttack (Triple Adversarial Learning for Influence based Poisoning Attack in Recommender Systems, KDD 2021)
Poisoning attack methods against adversarial training algorithms
Workshop on Adversarial Machine Learning
M. Anisetti, C. A. Ardagna, A. Balestrucci, N. Bena, E. Damiani, C. Y. Yeun. "On the Robustness of Random Forest Against Data Poisoning: An Ensemble-Based Approach". In IEEE TSUSC, vol. 8, no. 4
Hacking tool for local networks: man-in-the-middle, host scanning, ARP poisoning, and router/DNS poisoning
Indirect Invisible Poisoning Attacks on Domain Adaptation
TensorFlow implementation of APT (Fight Fire with Fire: Towards Robust Recommender Systems via Adversarial Poisoning Training, SIGIR 2021)
Official website of https://github.com/tamlhp/awesome-recsys-poisoning
Test tool to simulate defenses against poisoning attacks on an AI model
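To show what a poisoning defense can look like in miniature, here is a hedged sketch of a data-sanitization filter: before training, points lying unusually far from their own class centroid are discarded, since label-flipped poisons are outliers within the class they claim. The data, the distance threshold, and the poison fraction are all illustrative assumptions, not the actual method of the repository above:

```python
import numpy as np

# Minimal sketch of a sanitization-style poisoning defense: drop training
# points far from their own class centroid before fitting a model.
# All data and thresholds here are illustrative assumptions.

rng = np.random.default_rng(1)

# 200 clean points plus 30 poisons: class-0-looking samples labeled 1.
X = np.vstack([
    rng.normal(0, 1, (100, 2)),   # clean class 0
    rng.normal(4, 1, (100, 2)),   # clean class 1
    rng.normal(0, 1, (30, 2)),    # poisons, mislabeled as class 1
])
y = np.concatenate([np.zeros(100, int), np.ones(100, int), np.ones(30, int)])

def sanitize(X, y, k=1.0):
    """Return a boolean mask keeping points whose distance to their class
    centroid is below mean + k*std of that class's distance distribution."""
    keep = np.zeros(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        keep[idx] = d < d.mean() + k * d.std()
    return keep

keep = sanitize(X, y, k=1.0)
X_filtered, y_filtered = X[keep], y[keep]
print(f"kept {keep.sum()} of {len(y)} points")
```

The filter removes the mislabeled cluster at a much higher rate than the clean data; production defenses refine the same intuition with robust statistics or per-sample loss inspection.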
Continuous Integration and Continuous Delivery (CI/CD) poisoning guides
A Survey of Poisoning Attacks and Defenses in Recommender Systems
The official implementation of the CCS'23 paper on Narcissus, a clean-label backdoor attack: only THREE images are needed to poison a face recognition dataset in a clean-label way, achieving a 99.89% attack success rate.
PhD/MSc course on Machine Learning Security (Univ. Cagliari)
A Python library for Secure and Explainable Machine Learning