1. School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing 100088, China
2. School of Computer Science, Beijing Information Science and Technology University, Beijing 100192, China
[ "ZHAO Xiaojie (1997- ), female, born in Heze, Shandong, is a Ph.D. candidate at Beijing University of Posts and Telecommunications. Her main research interests include federated learning, attack and defense algorithms, privacy computing, and AI security." ]
[ "SHI Jinqiao (1978- ), male, born in Harbin, Heilongjiang, Ph.D., is a professor at Beijing University of Posts and Telecommunications. His main research interests include privacy protection, AI security, and anonymous communication technology." ]
[ "HUANG Mei (1997- ), female, born in Guiping, Guangxi, is a Ph.D. candidate at Beijing University of Posts and Telecommunications. Her main research interests include encrypted traffic classification and federated learning." ]
[ "KE Zhenhan (2000- ), male, born in Shiyan, Hubei, is a master's student at Beijing University of Posts and Telecommunications. His main research interests include secure multi-party computation and federated learning." ]
[ "SHEN Liyan (1992- ), female, born in Baoding, Hebei, Ph.D., is an associate professor at Beijing Information Science and Technology University. Her main research interests include AI security, privacy computing, secure multi-party computation, and federated learning." ]
Received: 2024-05-27
Revised: 2024-11-15
Published in print: 2024-12-25
ZHAO Xiaojie, SHI Jinqiao, HUANG Mei, et al. Survey on Byzantine attacks and defenses in federated learning[J]. Journal on Communications, 2024, 45(12): 197-215. DOI: 10.11959/j.issn.1000-436x.2024208.
Federated learning, as an emerging distributed machine learning paradigm, solves the problem of data islands. However, owing to its large scale, its distributed nature, and the strong autonomy of local clients, federated learning is highly vulnerable to Byzantine attacks that are difficult to detect, which seriously damages the integrity and availability of the model. First, taking Byzantine attacks as the research object, a detailed classification and analysis of attack principles was conducted. Second, guided by classic network security defense models, federated learning defense methods were classified and analyzed from the perspective of defense mechanisms. Finally, the key issues and research challenges that must be solved for federated learning to resist Byzantine attacks were identified, providing a new reference for future researchers.
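The contrast between plain averaging and Byzantine-robust aggregation that motivates this survey can be illustrated with a minimal toy sketch. This example is not from the paper; all client counts, attack scaling, and numbers are illustrative assumptions. Three honest clients submit similar gradient updates while one Byzantine client submits an amplified, sign-flipped update: FedAvg-style averaging is dragged far off course, whereas coordinate-wise median aggregation (one of the classic robust aggregators in this literature) stays close to the honest updates.

```python
import numpy as np

# Toy setup (illustrative): 3 honest clients, 1 Byzantine client.
rng = np.random.default_rng(0)
true_update = np.array([1.0, -2.0, 0.5])
honest = [true_update + 0.05 * rng.standard_normal(3) for _ in range(3)]
byzantine = -10.0 * true_update            # amplified sign-flipping attack
updates = np.stack(honest + [byzantine])

mean_agg = updates.mean(axis=0)            # FedAvg-style mean: not robust
median_agg = np.median(updates, axis=0)    # coordinate-wise median: robust

err_mean = np.linalg.norm(mean_agg - true_update)
err_median = np.linalg.norm(median_agg - true_update)
# A single attacker shifts the mean arbitrarily far, but per-coordinate
# the median remains among the honest values.
assert err_median < err_mean
```

The median tolerates this single attacker because, in each coordinate, the outlier value occupies at most one of the four sorted positions and never the middle; more sophisticated attacks surveyed in the paper are designed precisely to evade such statistics.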
KONEČNÝ J , MCMAHAN H B , RAMAGE D , et al . Federated optimization: distributed machine learning for on-device intelligence [J ] . arXiv Preprint , arXiv: 1610.02527 , 2016 .
HARD A , RAO K , MATHEWS R , et al . Federated learning for mobile keyboard prediction [J ] . arXiv Preprint , arXiv: 1811.03604 , 2018 .
YANG T , ANDREW G , EICHNER H , et al . Applied federated learning: improving google keyboard query suggestions [J ] . arXiv Preprint , arXiv: 1812.02903 , 2018 .
ANTUNES R S , ANDRÉ DA COSTA C , KÜDERLE A , et al . Federated learning for healthcare: systematic review and architecture proposal [J ] . ACM Transactions on Intelligent Systems and Technology , 2022 , 13 ( 4 ): 1 - 23 .
NIKNAM S , DHILLON H S , REED J H . Federated learning for wireless communications: motivation, opportunities, and challenges [J ] . IEEE Communications Magazine , 2020 , 58 ( 6 ): 46 - 51 .
YANG Z H , CHEN M Z , WONG K K , et al . Federated learning for 6G: applications, challenges, and opportunities [J ] . Engineering , 2022 , 8 : 33 - 41 .
SHI J B , ZHAO H J , WANG M Y , et al . Signal recognition based on federated learning [C ] // Proceedings of the IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS) . Piscataway : IEEE Press , 2020 : 1105 - 1110 .
PREUVENEERS D , RIMMER V , TSINGENOPOULOS I , et al . Chained anomaly detection models for federated learning: an intrusion detection case study [J ] . Applied Sciences , 2018 , 8 ( 12 ): 2663 .
KHRAMTSOVA E , HAMMERSCHMIDT C , LAGRAA S , et al . Federated learning for cyber security: SOC collaboration for malicious URL detection [C ] // Proceedings of the 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS) . Piscataway : IEEE Press , 2020 : 1316 - 1321 .
LIU Y , FAN T , CHEN T J , et al . FATE: an industrial grade platform for collaborative learning with data protection [J ] . Journal of Machine Learning Research , 2021 , 22 ( 226 ): 1 - 6 .
RYFFEL T , TRASK A , DAHL M , et al . A generic framework for privacy preserving deep learning [J ] . arXiv Preprint , arXiv: 1811.04017 , 2018 .
MA Y J , YU D H , WU T , et al . PaddlePaddle: an open-source deep learning platform from industrial practice [J ] . Frontiers of Data and Computing , 2019 , 1 ( 1 ): 105 - 115 .
BONAWITZ K , EICHNER H , GRIESKAMP W , et al . Towards federated learning at scale: system design [J ] . arXiv Preprint , arXiv: 1902.01046 , 2019 .
NGUYEN T D , RIEGER P , VITI R D , et al . FLAME: taming backdoors in federated learning [C ] // Proceedings of the 31st USENIX Security Symposium . Berkeley : USENIX Association , 2022 : 1415 - 1432 .
BARUCH M , BARUCH G , GOLDBERG Y . A little is enough: circumventing defenses for distributed learning [C ] // Proceedings of the 33rd International Conference on Neural Information Processing Systems . New York : ACM Press , 2019 : 8635 - 8645 .
TOLPEGIN V , TRUEX S , GURSOY M E , et al . Data poisoning attacks against federated learning systems [C ] // Proceedings of the 25th European Symposium on Research in Computer Security . Berlin : Springer , 2020 : 480 - 501 .
WU C H , WU F Z , QI T , et al . FedAttack: effective and covert poisoning attack on federated recommendation via hard sampling [C ] // Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining . New York : ACM Press , 2022 : 4164 - 4172 .
FANG M H , CAO X Y , JIA J Y , et al . Local model poisoning attacks to Byzantine-robust federated learning [C ] // Proceedings of the 29th USENIX Security Symposium . Berkeley : USENIX Association , 2020 : 1605 - 1622 .
SHEJWALKAR V , HOUMANSADR A . Manipulating the Byzantine: optimizing model poisoning attacks and defenses for federated learning [C ] // Proceedings of the 2021 Network and Distributed System Security Symposium . Reston : Internet Society , 2021 : 1 - 18 .
ZHANG S J , YIN H Z , CHEN T , et al . PipAttack: poisoning federated recommender systems for manipulating item promotion [C ] // Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining . New York : ACM Press , 2022 : 1415 - 1423 .
RONG D Z , HE Q M , CHEN J H . Poisoning deep learning based recommender model in federated learning scenarios [J ] . arXiv Preprint , arXiv: 2204.13594 , 2022 .
FUNG C , YOON C J , BESCHASTNIKH I . The limitations of federated learning in sybil settings [C ] // Proceedings of the 23rd International Symposium on Research in Attacks, Intrusions and Defenses . Berkeley : USENIX Association , 2020 : 301 - 316 .
YU Y , LIU Q , WU L K , et al . Untargeted attack against federated recommendation systems via poisonous item embeddings and the defense [C ] // Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence . New York : ACM Press , 2023 : 4854 - 4863 .
BAGDASARYAN E , VEIT A , HUA Y , et al . How to backdoor federated learning [J ] . arXiv Preprint , arXiv: 1807.00459 , 2018 .
WANG H Y , SREENIVASAN K , RAJPUT S , et al . Attack of the tails: yes, you really can backdoor federated learning [J ] . arXiv Preprint , arXiv: 2007.05084 , 2020 .
XIE C , KOYEJO O , GUPTA I . Fall of empires: breaking Byzantine-tolerant SGD by inner product manipulation [J ] . arXiv Preprint , arXiv: 1903.03936 , 2019 .
CHEN X L , ZAN D G , WU B C , et al . Adversarial sample generation algorithm for vertical federated learning [J ] . Journal on Communications , 2023 , 44 ( 8 ): 1 - 13 .
CHEN Y D , SU L L , XU J M . Distributed statistical machine learning in adversarial settings: Byzantine gradient descent [J ] . Proceedings of the ACM on Measurement and Analysis of Computing Systems , 2017 , 2 ( 1 ): 1 - 25 .
HE Y Z , MENG G Z , CHEN K , et al . Towards security threats of deep learning systems: a survey [J ] . IEEE Transactions on Software Engineering , 2022 , 48 ( 5 ): 1743 - 1770 .
YANG L , ZHU L B , YU Y M , et al . Review of federal learning and offensive-defensive confrontation [J ] . Netinfo Security , 2023 , 23 ( 12 ): 69 - 90 .
GAO Y , CHEN X F , ZHANG Y Y , et al . A survey of attack and defense techniques for federated learning systems [J ] . Chinese Journal of Computers , 2023 , 46 ( 9 ): 1781 - 1805 .
GUO S W , ZHANG X , YANG F , et al . Robust and privacy-preserving collaborative learning: a comprehensive survey [J ] . arXiv Preprint , arXiv: 2112.10183 , 2021 .
GU Y H , BAI Y B . Survey on security and privacy of federated learning models [J ] . Journal of Software , 2023 , 34 ( 6 ): 2833 - 2864 .
CHEN X B , REN Z Q , ZHANG H Y . Review on security threats and defense measures in federated learning [J ] . Journal of Computer Applications , 2024 , 44 ( 6 ): 1663 - 1672 .
WAN Y C , QU Y Y , NI W , et al . Data and model poisoning backdoor attacks on wireless federated learning, and the defense mechanisms: a comprehensive survey [J ] . IEEE Communications Surveys and Tutorials , 2024 , 26 ( 3 ): 1861 - 1897 .
ROSZEL M , NORVILL R , STATE R . An analysis of Byzantine-tolerant aggregation mechanisms on model poisoning in federated learning [C ] // Proceedings of the International Conference on Modeling Decisions for Artificial Intelligence . Berlin : Springer , 2022 : 143 - 155 .
SUN Y , LIU F F , LI D W , et al . Survey on Byzantine attacks and defenses in federated learning [J ] . Journal of Cybersecurity , 2023 ( 1 ): 17 - 37 .
CAO X Y , LAI L F . Distributed gradient descent algorithm robust to an arbitrary number of Byzantine attackers [J ] . IEEE Transactions on Signal Processing , 2019 , 67 ( 22 ): 5850 - 5864 .
XIAO H , XIAO H , ECKERT C . Adversarial label flips attack on support vector machines [C ] // Proceedings of the 20th European Conference on Artificial Intelligence . New York : ACM Press , 2012 : 870 - 875 .
BLANCHARD P , EL MHAMDI E M , GUERRAOUI R , et al . Machine learning with adversaries: Byzantine tolerant gradient descent [C ] // Proceedings of the 31st International Conference on Neural Information Processing Systems . New York : ACM Press , 2017 : 118 - 128 .
KASYAP H , TRIPATHY S . Sine: similarity is not enough for mitigating local model poisoning attacks in federated learning [J ] . IEEE Transactions on Dependable and Secure Computing , 2024 , 21 ( 5 ): 4481 - 4494 .
ROSENFELD E , WINSTON E , RAVIKUMAR P , et al . Certified robustness to label-flipping attacks via randomized smoothing [J ] . arXiv Preprint , arXiv: 2002.03018 , 2020 .
XU J , WANG R , KOFFAS S , et al . More is better (mostly): on the backdoor attacks in federated graph neural networks [C ] // Proceedings of the 38th Annual Computer Security Applications Conference . New York : ACM Press , 2022 : 684 - 698 .
XU J , HUANG S L , SONG L Q , et al . Byzantine-robust federated learning through collaborative malicious gradient filtering [C ] // Proceedings of the 2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS) . Piscataway : IEEE Press , 2022 : 1223 - 1235 .
AWAN S N , LUO B , LI F J . CONTRA: defending against poisoning attacks in federated learning [C ] // Proceedings of the 26th European Symposium on Research in Computer Security . Berlin : Springer , 2021 : 455 - 475 .
PRAKASH S , AVESTIMEHR A . Mitigating Byzantine attacks in federated learning [J ] . arXiv Preprint , arXiv: 2010.07541 , 2020 .
WEI K , LI J , DING M , et al . Covert model poisoning against federated learning: algorithm design and optimization [J ] . IEEE Transactions on Dependable and Secure Computing , 2024 , 21 ( 3 ): 1196 - 1209 .
YANG H , XI W , SHEN Y H , et al . RoseAgg: robust defense against targeted collusion attacks in federated learning [J ] . IEEE Transactions on Information Forensics and Security , 2024 , 19 : 2951 - 2966 .
ZHANG H , JIA J , CHEN J , et al . A3FL: adversarially adaptive backdoor attacks to federated learning [C ] // Proceedings of the 37th International Conference on Neural Information Processing Systems . New York : ACM Press , 2024 : 61213 - 61233 .
WU R H , CHEN X Y , GUO C , et al . Learning to invert: simple adaptive attacks for gradient inversion in federated learning [C ] // Proceedings of the 39th Conference on Uncertainty in Artificial Intelligence . New York : PMLR , 2023 : 2293 - 2303 .
JIN T S , FU Z H , MENG D , et al . FedPerturb: covert poisoning attack on federated learning via partial perturbation [C ] // Proceedings of the International Conference on Artificial Intelligence and Applications . Amsterdam : IOS Press , 2023 : 1172 - 1179 .
ZHANG H T , YAO Z M , ZHANG L Y , et al . Denial-of-service or fine-grained control: towards flexible model poisoning attacks on federated learning [J ] . arXiv Preprint , arXiv: 2304.10783 , 2023 .
SHEN S Q , TOPLE S , SAXENA P . Auror: defending against poisoning attacks in collaborative deep learning systems [C ] // Proceedings of the 32nd Annual Conference on Computer Security Applications . New York : ACM Press , 2016 : 508 - 519 .
GONG Z R , SHEN L Y , ZHANG Y J , et al . Agramplifier: defending federated learning against poisoning attacks through local update amplification [J ] . IEEE Transactions on Information Forensics and Security , 2024 , 19 : 1241 - 1250 .
ZHU C , ROOS S , CHEN L Y . LeadFL: client self-defense against model poisoning in federated learning [C ] // Proceedings of the International Conference on Machine Learning . New York : PMLR , 2023 : 43158 - 43180 .
GUO X T , WANG P F , QIU S , et al . FAST: adopting federated unlearning to eliminating malicious terminals at server side [J ] . IEEE Transactions on Network Science and Engineering , 2024 , 11 ( 2 ): 2289 - 2302 .
XIE C , CHEN M , CHEN P , et al . CRFL: certifiably robust federated learning against backdoor attacks [C ] // Proceedings of the International Conference on Machine Learning . New York : PMLR , 2021 : 11372 - 11382 .
WU L T , YUE M Q , ZHANG H B , et al . A network survivability evaluation method based on PDRR model [C ] // Proceedings of the 2023 International Conference on Electronics and Devices, Computational Science (ICEDCS) . Piscataway : IEEE Press , 2023 : 522 - 526 .
PAN J , LIU A J . Study of APPDRR model-based network security system [J ] . Telecom Engineering Technics and Standardization , 2009 , 22 ( 7 ): 27 - 30 .
ZHANG Z X , CAO X Y , JIA J Y , et al . FLDetector: defending federated learning against model poisoning attacks via detecting malicious clients [C ] // Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining . New York : ACM Press , 2022 : 2545 - 2555 .
SUN J W , LI A , DIVALENTIN L , et al . FL-WBC: enhancing robustness against model poisoning attacks in federated learning from a client perspective [J ] . Advances in Neural Information Processing Systems , 2021 , 34 : 12613 - 12624 .
PANDA A , MAHLOUJIFAR S , BHAGOJI A , et al . SparseFed: mitigating model poisoning attacks in federated learning with sparsification [C ] // Proceedings of the International Conference on Artificial Intelligence and Statistics . New York : PMLR , 2022 : 7587 - 7624 .
MOZAFFARI H , SHEJWALKAR V , HOUMANSADR A . Every vote counts: ranking-based training of federated learning to resist poisoning attacks [C ] // Proceedings of the 32nd USENIX Security Symposium . Berkeley : USENIX Association , 2023 : 1721 - 1738 .
CAO X Y , FANG M H , LIU J , et al . FLTrust: Byzantine-robust federated learning via trust bootstrapping [J ] . arXiv Preprint , arXiv: 2012.13995 , 2020 .
YIN D , CHEN Y D , RAMCHANDRAN K , et al . Byzantine-robust distributed learning: towards optimal statistical rates [J ] . arXiv Preprint , arXiv: 1803.01498 , 2018 .
MHAMDI E M E , GUERRAOUI R , ROUAULT S . The hidden vulnerability of distributed learning in Byzantium [J ] . arXiv Preprint , arXiv: 1802.07927 , 2018 .
PILLUTLA K , KAKADE S M , HARCHAOUI Z . Robust aggregation for federated learning [J ] . IEEE Transactions on Signal Processing , 2022 , 70 : 1142 - 1154 .
YAN G , WANG H , YUAN X , et al . DeFL: defending against model poisoning attacks in federated learning via critical learning periods awareness [C ] // Proceedings of the AAAI Conference on Artificial Intelligence . Palo Alto : AAAI Press , 2023 : 10711 - 10719 .
MUÑOZ-GONZÁLEZ L , CO K T , LUPU E C . Byzantine-robust federated machine learning through adaptive model averaging [J ] . arXiv Preprint , arXiv: 1909.05125 , 2019 .
LIU X Y , LI H W , XU G W , et al . Privacy-enhanced federated learning against poisoning adversaries [J ] . IEEE Transactions on Information Forensics and Security , 2021 , 16 : 4574 - 4588 .
MA Z R , MA J F , MIAO Y B , et al . ShieldFL: mitigating model poisoning attacks in privacy-preserving federated learning [J ] . IEEE Transactions on Information Forensics and Security , 2022 , 17 : 1639 - 1654 .
LI S Y , CHENG Y , WANG W , et al . Learning to detect malicious clients for robust federated learning [J ] . arXiv Preprint , arXiv: 2002.00211 , 2020 .
LI S Y , CHENG Y , LIU Y , et al . Abnormal client behavior detection in federated learning [J ] . arXiv Preprint , arXiv: 1910.09933 , 2019 .
JIANG Y F , ZHANG W W , CHEN Y X . Data quality detection mechanism against label flipping attacks in federated learning [J ] . IEEE Transactions on Information Forensics and Security , 2023 , 18 : 1625 - 1637 .
LI X Y , QU Z , ZHAO S Q , et al . LoMar: a local defense against poisoning attack on federated learning [J ] . IEEE Transactions on Dependable and Secure Computing , 2023 , 20 ( 1 ): 437 - 450 .
CAO X Y , JIA J Y , ZHANG Z X , et al . FedRecover: recovering from poisoning attacks in federated learning using historical information [C ] // Proceedings of the 2023 IEEE Symposium on Security and Privacy (SP) . Piscataway : IEEE Press , 2023 : 1366 - 1383 .
ZHANG L F , ZHU T Q , ZHANG H B , et al . FedRecovery: differentially private machine unlearning for federated learning frameworks [J ] . IEEE Transactions on Information Forensics and Security , 2023 , 18 : 4732 - 4746 .
YAN H N , ZHANG W J , CHEN Q , et al . RECESS vaccine for federated learning: proactive defense against model poisoning attacks [J ] . arXiv Preprint , arXiv: 2310.05431 , 2023 .
MCMAHAN H B , MOORE E , RAMAGE D , et al . Communication-efficient learning of deep networks from decentralized data [J ] . arXiv Preprint , arXiv: 1602.05629 , 2016 .
XIA Q , TAO Z Y , HAO Z J , et al . FABA: an algorithm for fast aggregation against Byzantine attacks in distributed neural networks [C ] // Proceedings of the 28th International Joint Conference on Artificial Intelligence . New York : ACM Press , 2019 : 4824 - 4830 .
SHEN L Y , KE Z H , SHI J Q , et al . SPEFL: efficient security and privacy-enhanced federated learning against poisoning attacks [J ] . IEEE Internet of Things Journal , 2024 , 11 ( 8 ): 13437 - 13451 .
HAN S , WU W , BUYUKATES B , et al . Kick bad guys out! zero-knowledge-proof-based anomaly detection in federated learning [J ] . arXiv Preprint , arXiv: 2310.04055 , 2023 .
JIANG Y , SHEN J Y , LIU Z Y , et al . Towards efficient and certified recovery from poisoning attacks in federated learning [J ] . arXiv Preprint , arXiv: 2401.08216 , 2024 .
ZHANG X Y , LIU Q Y , BA Z J , et al . FLTracer: accurate poisoning attack provenance in federated learning [J ] . arXiv Preprint , arXiv: 2310.13424 , 2023 .
WANG J X , GUO S , XIE X , et al . Federated unlearning via class-discriminative pruning [C ] // Proceedings of the ACM Web Conference 2022 . New York : ACM Press , 2022 : 622 - 632 .
XIA H , XU S , PEI J M , et al . FedME2: memory evaluation & erase promoting federated unlearning in DTMN [J ] . IEEE Journal on Selected Areas in Communications , 2023 , 41 ( 11 ): 3573 - 3588 .
WU C , ZHU S C , MITRA P . Federated unlearning with knowledge distillation [J ] . arXiv Preprint , arXiv: 2201.09441 , 2022 .
LIU H , LI K X , CHEN Y X . Survey on trustworthiness measurement for artificial intelligence systems [J ] . Journal of Software , 2023 , 34 ( 8 ): 3774 - 3792 .
WEI W Q , LIU L . Trustworthy distributed AI systems: robustness, privacy, and governance [J ] . ACM Computing Surveys , 2024 , 152 : 83 - 98 .
ZHANG Y F , ZENG D , LUO J L , et al . A survey of trustworthy federated learning with perspectives on security, robustness, and privacy [J ] . arXiv Preprint , arXiv: 2302.10637 , 2023 .
LU Z L , PAN H , DAI Y Y , et al . Federated learning with non-IID data: a survey [J ] . IEEE Internet of Things Journal , 2024 , 11 ( 11 ): 19188 - 19209 .
PENG B , CHI M M , LIU C . Non-IID federated learning via random exchange of local feature maps for textile IIoT secure computing [J ] . Science China Information Sciences , 2022 , 65 ( 7 ): 170302 .
ZHAO Y , LI M , LAI L Z , et al . Federated learning with non-IID data [J ] . arXiv Preprint , arXiv: 1806.00582 , 2018 .
MA X D , LI Q H , JIANG Q , et al . Byzantine-robust federated learning over Non-IID data [J ] . Journal on Communications , 2023 , 44 ( 6 ): 138 - 153 .
ZHAO R J , WANG Y J , XUE Z , et al . Semisupervised federated-learning-based intrusion detection method for Internet of Things [J ] . IEEE Internet of Things Journal , 2023 , 10 ( 10 ): 8645 - 8657 .
QIU P Y , ZHANG X H , JI S L , et al . Hijack vertical federated learning models as one party [J ] . IEEE Transactions on Dependable and Secure Computing , 2024 ( 99 ): 1 - 18 .
ZHANG J Q , HUA Y , WANG H , et al . FedALA: adaptive local aggregation for personalized federated learning [C ] // Proceedings of the AAAI Conference on Artificial Intelligence . Palo Alto : AAAI Press , 2023 : 11237 - 11244 .
MORAFAH M , VAHIDIAN S , WANG W J , et al . FLIS: clustered federated learning via inference similarity for non-IID data distribution [J ] . IEEE Open Journal of the Computer Society , 2023 , 4 : 109 - 120 .
WEN H , WU Y , HU J , et al . Communication-efficient federated learning on non-IID data using two-step knowledge distillation [J ] . IEEE Internet of Things Journal , 2023 , 10 ( 19 ): 17307 - 17322 .
ALSENANI Y , MISHRA R , AHMED K R , et al . FedSiKD: clients similarity and knowledge distillation: addressing non-i.i.d. and constraints in federated learning [J ] . arXiv Preprint , arXiv: 2402.09095 , 2024 .
CHEN H K , FRIKHA A , KROMPASS D , et al . FRAug: tackling federated learning with non-IID features via representation augmentation [C ] // Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV) . Piscataway : IEEE Press , 2023 : 4826 - 4836 .
YANG L , HUANG J M , LIN W Y , et al . Personalized federated learning on non-IID data via group-based meta-learning [J ] . ACM Transactions on Knowledge Discovery from Data , 2023 , 17 ( 4 ): 1 - 20 .
LI Z J , SUN Y C , SHAO J W , et al . Feature matching data synthesis for non-IID federated learning [J ] . IEEE Transactions on Mobile Computing , 2024 , 23 ( 10 ): 9352 - 9367 .
LAI J R , WANG T , CHEN C , et al . VFedAD: a defense method based on the information mechanism behind the vertical federated data poisoning attack [C ] // Proceedings of the 32nd ACM International Conference on Information and Knowledge Management . New York : ACM Press , 2023 : 1148 - 1157 .