Open Access System for Information Sharing

Article
Cited 0 times in Web of Science · Cited 1 time in Scopus

Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization

Title
Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization
Authors
Wei, Kang; Li, Jun; Ding, Ming; Ma, Chuan; Jeon, Yo-Seb; Poor, H. Vincent
Date Issued
2023-05
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Abstract
Federated learning (FL), as a type of distributed machine learning, is vulnerable to external attacks during parameter transmission between learning agents and a model aggregator. In particular, malicious participating clients in FL can purposefully craft their uploaded model parameters to manipulate system outputs, which is known as a model poisoning (MP) attack. In this paper, we propose effective MP algorithms to attack the classical defensive aggregation rule Krum at the aggregator. The proposed algorithms are designed to evade detection, i.e., covert MP (CMP). Specifically, we first formulate MP as an optimization problem that minimizes the Euclidean distance between the manipulated model and the designated one, constrained by Krum. Then, we develop CMP algorithms against Krum based on the solutions of this optimization problem. Furthermore, to reduce the optimization complexity, we propose low-complexity CMP algorithms with only a slight performance degradation. Our experimental results demonstrate that the proposed CMP algorithms are effective and can substantially outperform existing attack mechanisms, such as Arjun's attack and the label-flipping attack. More specifically, our original CMP can achieve a high attacker's accuracy. For example, in our experiments on the MNIST dataset, the proposed CMP algorithm against Krum can successfully manipulate the aggregated model into incorrectly classifying a given digit as a different one (e.g., 9 as 8). Meanwhile, our CMP algorithm with an approximated constraint can achieve an attacker's accuracy (attacker-desired results) of 87%, with a 73% complexity reduction compared to the original CMP.
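The Krum aggregation rule that the abstract's CMP attack targets can be sketched as follows. This is a minimal NumPy illustration of Krum as described by Blanchard et al., not the paper's own code; the function and variable names (`krum`, `updates`, `f`) are illustrative assumptions.

```python
import numpy as np

def krum(updates, f):
    """Krum aggregation: among n client updates, select the single update
    whose summed squared Euclidean distance to its n - f - 2 nearest
    neighbours is smallest, assuming at most f Byzantine clients.

    updates: (n, d) array of flattened model updates.
    f: assumed number of malicious clients.
    """
    n = len(updates)
    k = n - f - 2  # number of closest neighbours scored per candidate
    # Pairwise squared Euclidean distances, shape (n, n).
    diffs = updates[:, None, :] - updates[None, :, :]
    dists = (diffs ** 2).sum(axis=-1)
    scores = []
    for i in range(n):
        d = np.sort(np.delete(dists[i], i))  # distances to all other updates
        scores.append(d[:k].sum())           # sum over k nearest neighbours
    return updates[int(np.argmin(scores))]
```

Because Krum discards outlying updates, a naive poisoned update is rejected; the paper's CMP formulation instead searches for the malicious update closest (in Euclidean distance) to the attacker's target that Krum still selects.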
URI
https://oasis.postech.ac.kr/handle/2014.oak/118503
DOI
10.1109/tdsc.2023.3274119
ISSN
1545-5971
Article Type
Article
Citation
IEEE Transactions on Dependable and Secure Computing, pp. 1-14, 2023-05
Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
