1. School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
2. College of Computer Science, Chongqing University, Chongqing 400044, China
3. Department of Electrical Engineering, City University of Hong Kong, Hong Kong 999077, China
4. Key Laboratory of Tourism Multisource Data Perception and Decision (Ministry of Culture and Tourism), Chongqing 400065, China
[ "李开菊(1992- ),女,土家族,湖北恩施人,重庆大学博士生,主要研究方向为联邦学习、隐私保护" ]
[ "许强(1992- ),男,江西赣州人,博士,香港城市大学在站博士后,主要研究方向为视频安全、图像处理等" ]
[ "王豪(1990- ),男,河南驻马店人,博士,重庆邮电大学副教授,主要研究方向为联邦学习、隐私保护" ]
Online publication date: 2023-05
Print publication date: 2023-05-25
李开菊, 许强, 王豪. 冗余数据去除的联邦学习高效通信方法[J]. 通信学报, 2023, 44(5): 79-93. DOI: 10.11959/j.issn.1000-436x.2023072.
Kaiju LI, Qiang XU, Hao WANG. Communication-efficient federated learning method via redundant data elimination[J]. Journal on Communications, 2023, 44(5): 79-93. DOI: 10.11959/j.issn.1000-436x.2023072.
To mitigate the impact of edge devices' limited network bandwidth on the communication efficiency of federated learning, and to transmit local model updates efficiently for model aggregation, a communication-efficient federated learning method based on redundant data elimination is proposed. The method analyzes the root cause of redundant update parameters and, exploiting the non-IID data distribution and the distributed training characteristics of federated learning, gives new definitions of coreset sensitivity and loss-function tolerance, on which a federated coreset construction algorithm is built. Furthermore, to fit the extracted coreset, a distributed adaptive model evolution mechanism is designed that dynamically adjusts the structure and size of the training model before each global training iteration, reducing the number of bits communicated between edge devices and the cloud server while preserving the accuracy of the trained model. Simulation results show that, compared with the state-of-the-art method, the proposed method reduces the number of communicated bits by 17% with only a 0.5% drop in model accuracy.
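The abstract describes the coreset construction only at a high level. As a rough illustration of the general sensitivity-sampling idea behind such constructions (not the paper's actual algorithm: the function name, the use of per-example loss as the sensitivity score, and the uniform mixing floor are all assumptions made here for illustration), a minimal NumPy sketch:

import numpy as np

def sensitivity_coreset(losses, coreset_size, seed=None):
    """Sample a weighted coreset by sensitivity (importance) sampling.

    losses: per-example losses under the current local model, used here
    as a stand-in sensitivity score (a hypothetical choice, not the
    paper's definition). Returns indices and weights such that the
    weighted coreset loss is an unbiased estimator of the total loss
    over the full local dataset.
    """
    rng = np.random.default_rng(seed)
    losses = np.asarray(losses, dtype=float)
    # Each example's share of the total loss, mixed with a uniform
    # floor so low-loss examples keep a nonzero sampling probability.
    probs = 0.5 * losses / losses.sum() + 0.5 / losses.size
    idx = rng.choice(losses.size, size=coreset_size, replace=True, p=probs)
    # Inverse-probability weights keep the estimator unbiased:
    # E[sum_j w_j * loss[idx_j]] = sum_i loss[i].
    weights = 1.0 / (coreset_size * probs[idx])
    return idx, weights

# Example: the weighted coreset sum should approximate the full sum.
rng = np.random.default_rng(0)
losses = np.abs(rng.normal(size=1000))
idx, w = sensitivity_coreset(losses, coreset_size=200, seed=1)
print(losses.sum(), (w * losses[idx]).sum())

In this sketch each client would train only on its sampled indices with the returned weights, shrinking the local dataset (and, combined with a correspondingly smaller model as in the paper's adaptive evolution mechanism, the transmitted update) while keeping the weighted training objective an unbiased surrogate for the full local loss.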