Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
A paper collection on federated learning: accepted papers from conferences and journals (2019 to 2021), hot topics, notable research groups, and paper summaries.
A Python library for Secure and Explainable Machine Learning
PhD/MSc course on Machine Learning Security (Univ. Cagliari)
The official implementation of the CCS'23 Narcissus clean-label backdoor attack, which needs only three images to poison a face recognition dataset in a clean-label way and achieves a 99.89% attack success rate.
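As a simplified illustration of the clean-label idea (not the Narcissus attack itself, which optimizes its trigger rather than using a fixed patch; all names below are hypothetical), the sketch stamps a small patch onto a few target-class images while leaving their labels untouched:

```python
import numpy as np

def apply_trigger(images: np.ndarray, patch_value: float = 1.0,
                  patch_size: int = 3) -> np.ndarray:
    """Stamp a small bright patch into the bottom-right corner of each image.

    `images` has shape (N, H, W). Labels are NOT modified anywhere in this
    process, which is what makes the attack "clean-label": the poisoned
    samples still look correctly labeled to a human auditor.
    """
    poisoned = images.copy()
    poisoned[:, -patch_size:, -patch_size:] = patch_value
    return poisoned

# Hypothetical usage: poison only a handful of images of the target class.
rng = np.random.default_rng(0)
clean = rng.random((100, 28, 28))
target_idx = [3, 17, 42]  # a few target-class images; indices are illustrative
clean[target_idx] = apply_trigger(clean[target_idx])
```

A model trained on this data can learn to associate the patch with the target class, so the attacker later triggers misclassification by adding the same patch to a test image.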
A Survey of Poisoning Attacks and Defenses in Recommender Systems
Guides to Continuous Integration and Continuous Delivery (CI/CD) poisoning.
A test tool that simulates two types of poisoning attacks on an AI model.
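The listing does not say which two attack types the tool implements; a common pair in the literature is label flipping and noise injection. A minimal sketch of both, with all function names chosen here for illustration:

```python
import numpy as np

def flip_labels(y: np.ndarray, rate: float, n_classes: int,
                rng: np.random.Generator) -> np.ndarray:
    """Label flipping: reassign a fraction `rate` of labels to a different class."""
    y = y.copy()
    n_poison = int(rate * len(y))
    idx = rng.choice(len(y), size=n_poison, replace=False)
    # Adding an offset in [1, n_classes) modulo n_classes guarantees the
    # new label differs from the original one.
    offsets = rng.integers(1, n_classes, size=n_poison)
    y[idx] = (y[idx] + offsets) % n_classes
    return y

def inject_noise(X: np.ndarray, rate: float, scale: float,
                 rng: np.random.Generator) -> np.ndarray:
    """Noise injection: perturb a fraction `rate` of samples with Gaussian noise."""
    X = X.copy()
    n_poison = int(rate * len(X))
    idx = rng.choice(len(X), size=n_poison, replace=False)
    X[idx] += rng.normal(0.0, scale, size=X[idx].shape)
    return X
```

Either function can be applied to a training set before fitting a model to measure how much the poisoned fraction degrades test accuracy.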
FedAnil+ is a novel, lightweight, and secure Federated Deep Learning model that addresses non-IID data, privacy concerns, and communication overhead. This repo hosts a Python simulation of FedAnil+.
FedAnil is a secure, blockchain-enabled Federated Deep Learning model that addresses non-IID data and privacy concerns. This repo hosts a Python simulation of FedAnil.
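FedAnil's actual aggregation pipeline (blockchain-enabled, non-IID-aware) is more involved than the listing can show; as background, the baseline federated step such systems build on is a FedAvg-style weighted average of client updates. A minimal sketch, with the function name chosen here for illustration:

```python
import numpy as np

def fedavg(client_weights: list[np.ndarray],
           client_sizes: list[int]) -> np.ndarray:
    """FedAvg-style aggregation: average client parameter vectors,
    weighting each client by its number of local training samples.

    client_weights: one flattened parameter vector per client
    client_sizes:   local dataset size per client
    """
    total = sum(client_sizes)
    stacked = np.stack(client_weights)                      # (n_clients, n_params)
    coeffs = np.asarray(client_sizes, dtype=float) / total  # sums to 1
    return (coeffs[:, None] * stacked).sum(axis=0)

# Hypothetical usage: client 2 trained on 3x more data, so it dominates.
merged = fedavg([np.array([0.0, 0.0]), np.array([2.0, 4.0])], [1, 3])
```

Weighting by dataset size is what makes the global model a proper empirical average under IID data; non-IID settings, which FedAnil targets, are exactly where this plain average starts to break down.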
Tensorflow implementation of TrialAttack (Triple Adversarial Learning for Influence based Poisoning Attack in Recommender Systems. KDD 2021)
Poisoning attack methods against adversarial training algorithms
Workshop on Adversarial Machine Learning (in Spanish).
M. Anisetti, C. A. Ardagna, A. Balestrucci, N. Bena, E. Damiani, C. Y. Yeun, "On the Robustness of Random Forest Against Data Poisoning: An Ensemble-Based Approach," IEEE TSUSC, vol. 8, no. 4.
Hacking tool for local networks: man-in-the-middle, host scanning, ARP poisoning, router and DNS poisoning.
Indirect Invisible Poisoning Attacks on Domain Adaptation
Tensorflow implementation of APT (Fight Fire with Fire: Towards Robust Recommender Systems via Adversarial Poisoning Training. SIGIR 2021)