To protect sensitive data when training a Generative Adversarial Network (GAN), the standard approach is differentially private stochastic gradient descent (DP-SGD), in which controlled noise is added to the gradients. This noise can degrade the quality of the synthetic samples, and training may not even converge in its presence. We propose the Differentially Private Model Inversion (DPMI) method, in which the private data is first mapped to the latent space via a public generator, followed by a lower-dimensional DP-GAN with better convergence properties. Experimental results on the standard datasets CIFAR10 and SVHN, as well as on a facial landmark dataset for autism screening, show that our approach outperforms the standard DP-GAN method on Inception Score, Fréchet Inception Distance, and classification accuracy under the same privacy guarantee.
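The abstract's premise rests on the standard DP-SGD gradient-noising step. The sketch below illustrates that step in plain Python: per-example gradients are clipped to a fixed L2 norm and Gaussian noise scaled to the clipping bound is added before averaging. All names (`clip_gradient`, `dp_sgd_gradient`) and default parameter values are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def clip_gradient(grad, clip_norm):
    """Rescale a per-example gradient so its L2 norm is at most clip_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / (norm + 1e-12))
    return [g * scale for g in grad]

def dp_sgd_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Sum clipped per-example gradients, add Gaussian noise with standard
    deviation noise_multiplier * clip_norm per coordinate, then average.
    This is the core privatization step of DP-SGD; hyperparameter values
    here are placeholders, not the paper's settings."""
    rng = rng or random.Random(0)
    n = len(per_example_grads)
    dim = len(per_example_grads[0])
    clipped = [clip_gradient(g, clip_norm) for g in per_example_grads]
    summed = [sum(g[i] for g in clipped) for i in range(dim)]
    return [(s + rng.gauss(0.0, noise_multiplier * clip_norm)) / n for s in summed]
```

The paper's motivation is visible even in this toy form: the noise standard deviation scales with `clip_norm` per coordinate, so higher-dimensional gradients accumulate more total noise, which is why DPMI moves DP training into a lower-dimensional latent space.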
Title of host publication: 2021 IEEE International Workshop on Information Forensics and Security, WIFS 2021
State: Published - 2021
Event: 2021 IEEE International Workshop on Information Forensics and Security, WIFS 2021 - Montpellier, France
Duration: Dec 7, 2021 → Dec 10, 2021
Bibliographical note
Funding Information: Research reported in this publication was supported by the National Institutes of Health, United States of America, under award number R01MH121344-01 and the Child Family Endowed Professorship.
© 2021 IEEE.
Keywords
- Generative adversarial networks
- differential privacy
- model inversion
ASJC Scopus subject areas
- Computer Networks and Communications
- Information Systems
- Information Systems and Management
- Safety, Risk, Reliability and Quality