Open Access System for Information Sharing

Learning from gradients: Gradient inversion and its application using generative models in federated learning

Title
Learning from gradients: Gradient inversion and its application using generative models in federated learning
Authors
전진우 (Jinwoo Jeon)
Date Issued
2022
Publisher
포항공과대학교 (Pohang University of Science and Technology)
Abstract
The fundamental premise of federated learning is that raw data remains private because clients share gradients instead of the data itself. Under this premise, federated learning trains models on the server from clients' gradients of their data, preserving privacy. In recent years, researchers have studied the possibility of privacy leakage through gradient inversion and have pointed out this vulnerability of federated learning. Previous attacks relied on domain-specific priors, such as total variation, or on additional information, such as batch-normalization (BN) statistics, to improve reconstruction quality. We introduce novel gradient inversion algorithms that use a generative model as a prior. We demonstrate that prior knowledge in the form of a generative model, pre-trained on a data distribution similar to the victim's, gives a notable advantage to an adversary seeking to breach data privacy. Furthermore, we find that even when such a prior is not provided, one can be learned from the sequence of gradients the victim sends during training. Our findings suggest that federated learning frameworks without additional privacy mechanisms, such as differential privacy, may be at risk.
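The privacy risk the abstract describes can be illustrated in miniature. The sketch below is not the thesis's generative-model attack; it is a well-known toy case, assuming a single linear layer with a bias and squared-error loss, where the shared gradient reveals the client's input analytically: each row of dL/dW equals the residual times the input, and dL/db gives the residual itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one linear layer y = W x + b trained with squared-error loss.
in_dim, out_dim = 5, 3
W = rng.normal(size=(out_dim, in_dim))
b = rng.normal(size=out_dim)
x = rng.normal(size=in_dim)        # the client's private input
t = rng.normal(size=out_dim)       # the client's private target

# Client computes and shares the gradients of L = 0.5 * ||W x + b - t||^2.
r = W @ x + b - t                  # residual
grad_W = np.outer(r, x)            # dL/dW = r x^T
grad_b = r                         # dL/db = r

# Attacker: row i of dL/dW is r_i * x, and dL/db supplies r_i,
# so any row with r_i != 0 reveals x exactly.
i = int(np.argmax(np.abs(grad_b)))
x_recovered = grad_W[i] / grad_b[i]

print(np.allclose(x_recovered, x))  # True
```

For deep networks this closed-form trick no longer applies, which is why gradient inversion is instead posed as an optimization problem; the thesis's contribution is to constrain that optimization with a pre-trained (or gradient-learned) generative prior.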
URI
http://postech.dcollection.net/common/orgView/200000632656
https://oasis.postech.ac.kr/handle/2014.oak/117446
Article Type
Thesis
Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
