Distributed Deep Learning (DDL) allows disparate sites or entities to use their local data to collaboratively learn a model at a central server. To protect data privacy, existing approaches such as fully homomorphic encryption and differential privacy are either computationally prohibitive or insecure. In this paper, we propose applying a privacy-preserving transformation (PPT) to the data before sending the transformed data to the server. The design goals of PPT include computational efficiency, privacy preservation, and good learnability at the server with maximal reuse of DL software infrastructure. After analyzing the security model and possible attacks, we evaluated simple PPTs including scrambling, random linear transforms, and the Advanced Encryption Standard (AES). While AES is more secure than the others, it significantly degrades the learning performance. To address this challenge, we propose a novel random deep neural network as the PPT. Our experiments show that the random weights and connections provide adequate security and good learning performance at the server.
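The random-DNN idea described in the abstract can be sketched as follows: each client passes its data through a randomly initialized, untrained feed-forward network whose weights are kept secret from the server, and only the transformed features are uploaded. This is a minimal illustrative sketch, not the paper's implementation; the function name, layer sizes, and initialization scheme are assumptions.

```python
import numpy as np

def random_dnn_ppt(x, layer_sizes, seed=0):
    """Illustrative privacy-preserving transform (PPT): push data through
    a randomly initialized feed-forward network. The weights are never
    trained and are kept secret from the server. (Hypothetical sketch,
    not the authors' exact architecture.)"""
    rng = np.random.default_rng(seed)
    h = x
    for out_dim in layer_sizes:
        # Random weights, scaled to keep activations numerically stable
        W = rng.standard_normal((h.shape[-1], out_dim)) / np.sqrt(h.shape[-1])
        h = np.maximum(h @ W, 0.0)  # ReLU makes the transform nonlinear
    return h

# A client transforms its local data before sending it to the server:
x = np.random.default_rng(1).standard_normal((4, 16))  # 4 samples, 16 features
z = random_dnn_ppt(x, layer_sizes=[32, 24])
print(z.shape)  # (4, 24)
```

The server then trains on the transformed features `z` with standard DL tooling, which is how the design reuses existing software infrastructure.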
|Title of host publication||10th IEEE International Workshop on Information Forensics and Security, WIFS 2018|
|State||Published - Jul 2 2018|
|Event||10th IEEE International Workshop on Information Forensics and Security, WIFS 2018 - Hong Kong, Hong Kong|
Duration: Dec 10 2018 → Dec 13 2018
|Name||10th IEEE International Workshop on Information Forensics and Security, WIFS 2018|
|Conference||10th IEEE International Workshop on Information Forensics and Security, WIFS 2018|
|Period||12/10/18 → 12/13/18|
|Bibliographical note||Publisher Copyright: © 2018 IEEE|
ASJC Scopus subject areas
- Computer Networks and Communications
- Information Systems and Management
- Safety, Risk, Reliability and Quality