zhangzp9970/README.md

Hi there 👋, I'm Zeping Zhang!

  • 🔭 I’m currently working on AI security, particularly Model Inversion Attacks (MIA)
  • 🌱 I’m currently pursuing a Ph.D. in Cyber Science and Engineering at Southeast University, China.
  • 📫 How to reach me: zhangzp9970@outlook.com
  • 👯 I’m looking to collaborate on torchplus and on making a difference!
  • 😄 Pronouns: zzp
  • ✒️ Motto: Independence of Spirit, Freedom of Mind. (独立之精神,自由之思想。)

Papers📃

  • Z. Zhang, X. Wang, J. Huang, and S. Zhang, “Analysis and Utilization of Hidden Information in Model Inversion Attacks,” IEEE Transactions on Information Forensics and Security, vol. 18, pp. 4449–4462, 2023, doi: 10.1109/TIFS.2023.3295942. [Code]
  • S. Zhang, J. Huang, Z. Zhang, and C. Qi, “Compromise Privacy in Large-Batch Federated Learning via Malicious Model Parameters,” in Algorithms and Architectures for Parallel Processing, W. Meng, R. Lu, G. Min, and J. Vaidya, Eds., in Lecture Notes in Computer Science. Cham: Springer Nature Switzerland, 2023, pp. 63–80. doi: 10.1007/978-3-031-22677-9_4. [Code]
  • S. Zhang, J. Huang, Z. Zhang, P. Li, and C. Qi, “Compromise privacy in large-batch Federated Learning via model poisoning,” Information Sciences, vol. 647, p. 119421, Nov. 2023, doi: 10.1016/j.ins.2023.119421. [Code]
  • C. Liang, J. Huang, Z. Zhang, and S. Zhang, “Defending against model extraction attacks with OOD feature learning and decision boundary confusion,” Computers & Security, vol. 136, p. 103563, Jan. 2024, doi: 10.1016/j.cose.2023.103563.
  • P. Li, J. Huang, H. Wu, Z. Zhang, and C. Qi, “SecureNet: Proactive intellectual property protection and model security defense for DNNs based on backdoor learning,” Neural Networks, p. 106199, Feb. 2024, doi: 10.1016/j.neunet.2024.106199.

Pinned

  1. Amplified-MIA — Official code for the paper: Z. Zhang, X. Wang, J. Huang and S. Zhang, “Analysis and Utilization of Hidden Information in Model Inversion Attacks,” IEEE Transactions on Information Forensics and Security. (Python)

  2. MIA — Unofficial PyTorch implementation of the paper “Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures”. (Python)

  3. torchplus — A utilities library that extends PyTorch and torchvision. (Python)

  4. CNS — A crypto system designed for the cryptography class at Southeast University. (C++)

  5. VisioNeuralNet — Accelerate your research by easily plotting beautiful neural networks in Microsoft Visio!

  6. andOTP/andOTP (public archive) — [Unmaintained] Open source two-factor authentication for Android. (Java)
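The confidence-based model inversion attack that the MIA repository implements recovers a class-representative input by ascending the model's confidence surface. Below is a minimal, self-contained sketch of that idea using a toy logistic "target model" with hypothetical fixed weights — an illustration of the technique, not the repository's actual code:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical "target model": a logistic classifier with fixed weights.
W = [1.5, -2.0]
B = 0.3

def confidence(x):
    # The model's confidence that input x belongs to the target class.
    return sigmoid(sum(w * xi for w, xi in zip(W, x)) + B)

def invert(steps=200, lr=0.5):
    # Start from a neutral input and ascend the confidence gradient,
    # recovering a representative input for the target class.
    x = [0.0, 0.0]
    for _ in range(steps):
        c = confidence(x)
        # Analytic gradient of sigmoid confidence w.r.t. the input.
        grad = [c * (1.0 - c) * w for w in W]
        x = [xi + lr * g for xi, g in zip(x, grad)]
    return x

x_rec = invert()
print(f"recovered input: {x_rec}, confidence: {confidence(x_rec):.3f}")
```

Against a real network the same loop runs over pixels with autograd supplying the gradient, typically with a prior or regularizer to keep the reconstruction natural-looking.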