brighter-ai/awesome-privacy-papers


Awesome Privacy Papers for Visual Data

The ongoing AI-driven data analytics revolution requires enormous amounts of visual data. At the same time, it is a social responsibility to protect the privacy of the individuals linked to these data.

We care deeply about privacy. To strengthen our knowledge in this field and understand how it relates to visual data, we continuously review the latest scientific work on this topic.

We want to share with you a curated list of machine/deep learning papers that address the topic of privacy in visual data.

Contents

2022

  • DeepPrivacy2: Towards Realistic Full-Body Anonymization - H. Hukkelas, F. Lindseth - Authors present an anonymization framework (DeepPrivacy2) for realistic anonymization of human figures and faces. It is based on a style-based GAN that outputs high-quality, editable anonymizations. A new dataset for human figure synthesis is also introduced. [code]

2020

  • PE-MIU: A Training-Free Privacy-Enhancing Face Recognition Approach Based on Minimum Information Units - P. Terhörst, K. Riehl, N. Damer, P. Rot, B. Bortolato, F. Kirchbuchner, V. Struc, A. Kuijper - Authors propose a training-free privacy-enhancing face recognition approach based on dividing a face template into minimum information unit (MIU) blocks and randomly changing their positions in the template. [code]
  • Unsupervised Enhancement of Soft-biometric Privacy with Negative Face Recognition - P. Terhörst, M. Huber, N. Damer, F. Kirchbuchner, A. Kuijper - Authors propose an unsupervised privacy-preserving face recognition approach based on representing face templates in a complementary (negative) domain that describes facial properties the individual does not possess. [code]
  • Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models - S. Shan, E. Wenger, J. Zhang, H. Li, H. Zheng, B. Y. Zhao - Authors propose Fawkes, a system that adds small perturbations ('cloaks') to images, impairing the effectiveness of identification systems and protecting users' privacy against unauthorized facial recognition models. [code]
  • PrivacyNet: Semi-Adversarial Networks for Multi-attribute Face Privacy - V. Mirjalili, S. Raschka, A. Ross - PrivacyNet, using a GAN-based semi-adversarial network (SAN), modifies an input face image in such a way that only selected attributes can be reliably classified by third-party biometric algorithms.
  • Adversarial Privacy-preserving Filter - J. Zhang, J. Sang, X. Zhao, X. Huang, Y. Sun, Y. Hu - This work proposes an adversarial privacy-preserving filter (APF) that adds an adversarial noise perturbation (imperceptible to humans) to a face image in order to impair unauthorized face recognition models. The protocol is introduced in the context of photo-sharing services: the user uploads face images to the service, and the privacy-preserving filter is applied in the cloud before the image is shared.
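
The MIU idea behind PE-MIU above can be sketched in a few lines. This is a hypothetical minimal illustration, not the authors' implementation: it assumes a face template is a flat feature vector that is split into equal-sized blocks whose order is then randomly permuted; the function and parameter names are invented for this sketch.

```python
import numpy as np

def shuffle_miu_blocks(template, block_size, rng=None):
    """Split a 1-D face template into minimum information units
    (equal-sized blocks) and randomly permute their order.

    The stored template no longer preserves the original feature
    layout, while block-wise matching can remain order-invariant.
    """
    rng = np.random.default_rng() if rng is None else rng
    assert len(template) % block_size == 0, "template must divide into blocks"
    blocks = template.reshape(-1, block_size)
    perm = rng.permutation(len(blocks))
    return blocks[perm].reshape(-1), perm

# Toy 12-dimensional "template" split into three 4-dimensional MIUs.
template = np.arange(12, dtype=float)
protected, perm = shuffle_miu_blocks(template, block_size=4)
```

The protected vector contains exactly the same blocks as the original, just in a shuffled order, which is what makes a block-wise (position-agnostic) comparison still possible.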

2019

2018

  • Learning to Anonymize Faces for Privacy Preserving Action Detection - Z. Ren, Y. Jae Lee, M. S. Ryoo - Authors present a novel approach to learn a face anonymizer and activity detector using an adversarial learning formulation. The end result is a video anonymizer that performs pixel-level modifications to anonymize each person’s face, with minimal effect on action detection performance.
  • On Hallucinating Context and Background Pixels from a Face Mask using Multi-scale GANs - S. Banerjee, W. J. Scheirer, K. W. Bowyer, P. J. Flynn - Authors propose a multi-scale GAN model that can synthesize realistic context (forehead, hair, neck, clothes) and background pixels given a masked face input, without any user supervision. The generated face images can be used as stock images by the media without any privacy concerns.
  • Privacy-Protective-GAN for Face De-identification - Y. Wu, F. Yang, H. Ling - Authors present a generative approach that integrates a de-identification metric into the objective function to ensure privacy protection, while retaining utility and visual similarity through a regulator module.

2017

2016

  • Privacy-CNH: A Framework to Detect Photo Privacy with Convolutional Neural Network using Hierarchical Features - L. Tran, D. Kong, H. Jin, J. Liu - Authors propose a framework called Privacy-CNH that combines hierarchical features, including both object and convolutional features, in a deep learning model to detect privacy-at-risk photos. The object features enable the model to better explain to users why a photo carries privacy risk.
  • Privacy-Preserving Human Activity Recognition from Extreme Low Resolution - M. S. Ryoo, B. Rothrock, C. Fleming, H. J. Yang - Authors address human activity recognition from extreme low-resolution videos using an inverse super resolution approach, aiming at a computer vision system that can recognize human activities and assist in daily life while ensuring it does not record video that may invade privacy.
  • Faceless Person Recognition: Privacy Implications in Social Media - S. J. Oh, R. Benenson, M. Fritz, B. Schiele - Authors raise concerns about privacy implications by analysing how recognisable people are in social media data under different scenarios.
  • De-identification for Privacy Protection in Multimedia Content: A Survey - S. Ribaric, A. Ariyaeeinia, N. Pavesic - In an extensive survey, authors cover de-identification of various non-biometric as well as behavioural and physiological identifiers. They outline new directions in de-identification research and point out the problems of detecting and removing socially and environmentally privacy-sensitive context.

2015

  • Towards privacy-preserving recognition of human activities - J. Dai, B. Saghafi, J. Wu, J. Konrad, P. Ishwar - Authors studied and quantified the impact of camera resolution on action recognition accuracy in a simulated environment (Unity3D + Kinect v2). Results for a dataset of 12 individuals performing 4 actions indicate, somewhat surprisingly, that recognition accuracy at single-pixel resolution can be quite close to that at 100 × 100 resolution, suggesting that reliable action recognition can be achieved without compromising an occupant's identity. [project page]
  • Attribute Preserved Face De-identification - A. Jourabloo, X. Yin, X. Liu - Authors propose an optimization-based method for face de-identification with the goal of changing the identity of a test image while preserving a large set of facial attributes. They combine the attribute classifiers and face verification classifier in a joint objective function.
  • The Privacy-Utility Tradeoff for Remotely Teleoperated Robots - D. J. Butler, J. Huang, F. Roesner, M. Cakmak - Authors explore the privacy-utility tradeoff for remotely teleoperated robots in the home: two surveys provide a framework for understanding end-users' privacy attitudes, and a user study empirically examines how different filters of visual information affect a teleoperator's ability to carry out a task.
  • An Overview of Face De-identification in Still Images and Videos - S. Ribaric, N. Pavesic - Authors perform a survey of existing de-identification methods, outline the main issues and provide motivation for the research of such methods.
  • Controllable Face Privacy - T. Sim, L. Zhang - Authors apply a subspace decomposition onto face encoding scheme, effectively decoupling facial attributes such as gender, age, and identity into mutually orthogonal subspaces, which in turn enables independent control of these attributes. This approach protects identity privacy, and yet allows other computer vision analyses, such as gender detection, to proceed unimpeded.

2014

2009

  • Face De-Identification - R. Gross, L. Sweeney, J. Cohn, F. de la Torre, S. Baker - Authors introduce a novel de-identification framework using multi-factor models which unify linear, bilinear, and quadratic data models. They demonstrate that the algorithm protects privacy (measured by face recognition performance) while preserving data utility (measured by facial expression classification performance on de-identified images), and suggest that the model extends directly to image sequences.

2006

  • Model-Based Face De-Identification - R. Gross, L. Sweeney, F. de la Torre, S. Baker - Authors improve on an algorithm for the protection of privacy in facial images, the k-Same-M algorithm, with a focus on better image quality than its predecessors.
  • People Identification with Limited Labels in Privacy-Protected Video - Y. Chang, R. Yan, D. Chen, J. Yang - Authors explore a two-step labeling process for video data that balances insufficient training data against privacy protection: one set of labels is provided by authorized personnel from the original video, and another set of imperfect pairwise constraints is labeled by unauthorized personnel from the video with faces masked. The effectiveness of the approach is demonstrated on video captured in a nursing home environment.
  • Blur Filtration Fails to Preserve Privacy for Home-Based Video Conferencing - C. Neustaedter, S. Greenberg, M. Boyle - Authors begin by reinterpreting the results of a previous study (Boyle et al. 2000), arguing that blurring the video does not balance privacy and awareness in risky situations, since people do not feel comfortable relying on such techniques. They then outline a set of design implications that suggest strategies for balancing privacy and awareness.

2005

  • Preserving Privacy by De-identifying Facial Images - E. Newton, L. Sweeney, B. Malin - This paper presents a privacy-preserving face de-identification algorithm, k-Same, that determines similarity between faces based on a distance metric and creates new faces by averaging image components, which may be the original image pixels (k-Same-Pixel) or eigenvectors (k-Same-Eigen). Authors also review related work and discuss real-world privacy implications of their approach.
  • Integrating Utility into Face De-Identification - R. Gross, E. Airoldi, B. Malin, L. Sweeney - Authors provide a comprehensive study on how various de-identification methods preserve face attributes / data utility and introduce a de-identification method, which was at the time superior to prior methods in both privacy protection and utility preservation.
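
The core of k-Same-Pixel described above can be sketched as a simple clustering-and-averaging loop. This is a hedged toy illustration, not the paper's implementation: it assumes aligned grayscale images as NumPy arrays and plain Euclidean distance on pixels, and the function name is invented for this sketch.

```python
import numpy as np

def k_same_pixel(faces, k):
    """Toy sketch of k-Same-Pixel: repeatedly pick a face, find its k
    nearest neighbours (Euclidean distance on pixels, anchor included),
    replace all of them with their pixel-wise average, and remove them
    from the pool. Each output face is then an average of k originals,
    so a recogniser can link it to any single original with probability
    at most 1/k. Assumes len(faces) is a multiple of k.
    """
    faces = [np.asarray(f, dtype=float) for f in faces]
    remaining = list(range(len(faces)))
    deidentified = [None] * len(faces)
    while remaining:
        anchor = faces[remaining[0]]
        # Sort remaining faces by distance to the anchor face.
        remaining.sort(key=lambda i: np.linalg.norm(faces[i] - anchor))
        cluster, remaining = remaining[:k], remaining[k:]
        avg = np.mean([faces[i] for i in cluster], axis=0)
        for i in cluster:
            deidentified[i] = avg
    return deidentified

# Four toy 2x2 "faces": two near-identical pairs.
faces = [np.full((2, 2), float(v)) for v in (0, 1, 10, 11)]
deid = k_same_pixel(faces, k=2)
```

With k=2 above, the two similar faces in each pair are replaced by their common average, which is exactly the k-anonymity mechanism the entry describes.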

2004

  • Blinkering Surveillance: Enabling Video Privacy through Computer Vision - A. W. Senior, S. Pankanti, A. Hampapur, L. M. Brown, Y. Tian, A. Ekin - Authors present a review of privacy topic in video surveillance and embody derived principles of privacy protection in a prototype system.
  • Robust Human Face Hiding Ensuring Privacy - I. Martinez-Ponte, X. Desurmont, J. Meessen, J-F. Delaigle - Authors present a face detection and tracking system with the intention of masking the face and making it unrecognizable. They do so by employing a lossy compression algorithm.

2003

  • Engineering Privacy in Public: Confounding Face Recognition - J. Alexander, J. M. Smith - The chief contribution of this paper is an empirical evaluation of various face recognition countermeasures against the first of several systems, the eigenfaces method, together with a framework for interpreting those results.

2000

  • The Effects of Filtered Video on Awareness and Privacy - M. Boyle, C. Edwards, S. Greenberg - Authors provide a study of how blurring and pixelation impact privacy and attribute awareness in video. They examine the following attributes: the number of actors, their posture, gender, busyness, seriousness, approachability, and background objects.
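
A pixelation filter of the kind studied above can be sketched as block averaging. This is a minimal illustration under stated assumptions, not the study's actual filter: it assumes a grayscale image as a NumPy array whose dimensions are divisible by the block size, and the function name is invented here.

```python
import numpy as np

def pixelate(image, block=8):
    """Replace each block x block tile with its mean value, producing
    the mosaic effect used in filtered-video privacy studies.
    Assumes image height and width are divisible by `block`.
    """
    h, w = image.shape
    # View the image as a grid of tiles, average within each tile...
    tiles = image.reshape(h // block, block, w // block, block)
    means = tiles.mean(axis=(1, 3))
    # ...then expand each tile mean back to full resolution.
    return np.repeat(np.repeat(means, block, axis=0), block, axis=1)

img = np.arange(16, dtype=float).reshape(4, 4)
mosaic = pixelate(img, block=2)
```

Smaller block sizes preserve more awareness cues (posture, busyness) at the cost of privacy, which is precisely the trade-off the study measures.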

Contribute

Contributions welcome! Read the contribution guidelines first.
