1. College of Information Engineering, Yangzhou University, Yangzhou 225127, China
2. Jiangsu Engineering Research Center for Knowledge Management and Intelligent Service, Yangzhou 225127, China
3. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
ZHANG Jiale (1994- ), born in Bengbu, Anhui, Ph.D., is a lecturer and master's supervisor at Yangzhou University. His main research interests include artificial intelligence security, federated learning, and data privacy protection.
ZHU Chengcheng (2000- ), born in Linquan, Anhui, is a master's student at Yangzhou University. His main research interests include federated learning security and privacy protection.
SUN Xiaobing (1985- ), born in Jiangyan, Jiangsu, Ph.D., is a professor and doctoral supervisor at Yangzhou University. His main research interests include software security, artificial intelligence security, and blockchain security.
CHEN Bing (1970- ), born in Nantong, Jiangsu, Ph.D., is a professor and doctoral supervisor at Nanjing University of Aeronautics and Astronautics. His main research interests include wireless networks, artificial intelligence security, cyberspace security, and intelligent unmanned systems.
Online publication date: 2023-05
Print publication date: 2023-05-25
张佳乐, 朱诚诚, 孙小兵, 等. 基于GAN的联邦学习成员推理攻击与防御方法[J]. 通信学报, 2023,44(5):193-205. DOI: 10.11959/j.issn.1000-436x.2023094.
Jiale ZHANG, Chengcheng ZHU, Xiaobing SUN, et al. Membership inference attack and defense method in federated learning based on GAN[J]. Journal on Communications, 2023, 44(5): 193-205. DOI: 10.11959/j.issn.1000-436x.2023094.
Federated learning systems are highly vulnerable to membership inference attacks launched by malicious participants during the prediction stage, and existing defense methods struggle to balance privacy protection against model loss. To address these problems, membership inference attacks and their defenses were explored in the context of federated learning. First, two membership inference attack methods based on generative adversarial networks (GAN) were proposed: a class-level attack and a user-level attack, where the former aims to leak the training data privacy of all participants, while the latter can target one specific participant. In addition, a membership inference defense method based on adversarial examples (DefMIA) was further proposed: by designing an adversarial-example noise injection method for the global model parameters, it can effectively defend against membership inference attacks while preserving the accuracy of federated learning. Experimental results show that the class-level and user-level membership inference attacks achieve over 90% attack accuracy in federated learning, while after applying DefMIA their attack accuracy drops significantly, approaching random guessing (50%).
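To illustrate the two ideas summarized above, the following sketch uses a simple confidence-threshold test as a stand-in for the paper's GAN-based attack classifier (all names here are hypothetical, not the authors' code): membership leaks because models tend to be over-confident on their own training samples, and a DefMIA-style defense blunts that signal by perturbing the released outputs while keeping the predicted label unchanged.

```python
# Simplified membership inference: a model is typically more confident on
# samples it was trained on, so a high top confidence suggests "member".
# (The paper's actual attack trains a classifier on GAN-augmented data.)
def infer_membership(confidence: float, threshold: float = 0.9) -> bool:
    """Predict 'member' when the target model's top confidence exceeds threshold."""
    return confidence >= threshold


# DefMIA-style idea (illustrative only): flatten the released confidence
# vector toward uniform by a factor epsilon. The argmax label is preserved
# (the map is affine and monotone), but the member/non-member confidence
# gap the attacker relies on shrinks.
def defend(confidences: list[float], epsilon: float = 0.3) -> list[float]:
    k = len(confidences)
    return [(1.0 - epsilon) * c + epsilon / k for c in confidences]


# A confident prediction on a training member...
raw = [0.97, 0.02, 0.01]
defended = defend(raw)
# ...still predicts class 0, but no longer crosses the attack threshold.
```

This is only a toy model of the output-perturbation intuition; the paper's DefMIA injects adversarial-example noise at the level of the global model parameters rather than post-processing single predictions.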