Hi there 👋

I am a Ph.D. student at the Artificial Intelligence and Machine Learning Lab at TU Darmstadt. My research interests lie in the privacy and security of artificial intelligence (AI) and deep learning systems.

As AI becomes more widespread and is deployed in critical areas such as autonomous driving, medical applications, and finance, the security of models and the privacy of their training data become crucial. In my work, I study various attacks on machine learning models to understand and mitigate the resulting threats to safety and privacy.

I usually provide all source code required to reproduce my research and build upon it. If you have any questions or run into any problems, feel free to reach out to me or open an issue on GitHub.

Pinned

  1. Rickrolling-the-Artist

    [ICCV 2023] Source code for our paper "Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models". (Python)

  2. Exploiting-Cultural-Biases-via-Homoglyphs

    [Journal of Artificial Intelligence Research] Source code for our paper "Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis". (Python)

  3. Plug-and-Play-Attacks

    [ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be Careful What You Smooth For". (Jupyter Notebook)

  4. ml-research/Learning-to-Break-Deep-Perceptual-Hashing

    [FAccT 2022] Source code for our paper "Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash". (Python)

  5. Class_Attribute_Inference_Attacks

    Source code for our paper "Image Classifiers Leak Sensitive Attributes About Their Classes". (Python)

  6. Robust_Training_on_Poisoned_Samples

    [NeurIPS 2023 Workshop] Source code for our paper "Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data". (Python)