School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
LIU Tingting (1982- ), female, born in Xi'an, Shaanxi, Ph.D., associate professor at Beihang University. Her research interests include interference management, resource planning, and information prediction based on machine learning and wireless big data.
LUO Yi'nan (1995- ), male, born in Dandong, Liaoning, M.S. candidate at Beihang University. His research interest is distributed interference coordination in ultra-dense networks.
YANG Chenyang (1965- ), female, born in Hangzhou, Zhejiang, Ph.D., professor and doctoral supervisor at Beihang University. Her research interests include caching, transmission resource management, and ultra-reliable low-latency communication based on machine learning and wireless big data.
Online publication date: 2020-07
Print publication date: 2020-07-25
Citation: Tingting LIU, Yi'nan LUO, Chenyang YANG. Distributed interference coordination based on multi-agent deep reinforcement learning[J]. Journal on Communications, 2020, 41(7): 38-48. DOI: 10.11959/j.issn.1000-436x.2020149.
A distributed interference coordination strategy based on multi-agent deep reinforcement learning is proposed for file downloading traffic in interference networks. The proposed strategy adaptively adjusts the transmission scheme according to the interference environment and the traffic requirements, while requiring only a small amount of information to be exchanged among nodes. Simulation results show that, for an arbitrary number of users and arbitrary traffic requirements, the user satisfaction loss of the proposed strategy relative to the optimal strategy with perfect future information does not exceed 11%.
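The core idea — each transmitter acts as an agent that learns a transmission policy from its own observations plus a small amount of information exchanged with its peer — can be illustrated with a deliberately simplified sketch. The paper trains a deep Q-network per node; the toy below substitutes tabular independent Q-learning on a hypothetical two-link interference channel, where each agent observes only the peer's last power index (the "limited information exchange") and both receive the sum rate as reward. All channel gains, power levels, and hyperparameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-link interference channel: each agent picks a transmit
# power; higher power raises its own rate but interferes with the peer.
POWERS = [0.0, 0.5, 1.0]           # candidate power levels (assumed)
G_DIRECT, G_CROSS, NOISE = 1.0, 0.5, 0.1

def sum_rate(p0, p1):
    """Shannon sum rate of the two links, interference treated as noise."""
    r0 = np.log2(1 + G_DIRECT * p0 / (NOISE + G_CROSS * p1))
    r1 = np.log2(1 + G_DIRECT * p1 / (NOISE + G_CROSS * p0))
    return r0 + r1

class Agent:
    """Independent Q-learner; state = peer's last power index."""
    def __init__(self, n_states=len(POWERS), n_actions=len(POWERS)):
        self.Q = np.zeros((n_states, n_actions))
        self.eps, self.alpha, self.gamma = 0.1, 0.1, 0.9

    def act(self, s):
        if rng.random() < self.eps:          # epsilon-greedy exploration
            return int(rng.integers(len(POWERS)))
        return int(np.argmax(self.Q[s]))

    def learn(self, s, a, r, s_next):
        td = r + self.gamma * self.Q[s_next].max() - self.Q[s, a]
        self.Q[s, a] += self.alpha * td

agents = [Agent(), Agent()]
s = [0, 0]
for _ in range(5000):
    a = [agents[i].act(s[i]) for i in range(2)]
    r = sum_rate(POWERS[a[0]], POWERS[a[1]])  # shared (cooperative) reward
    s_next = [a[1], a[0]]                     # only peer's action is exchanged
    for i in range(2):
        agents[i].learn(s[i], a[i], r, s_next[i])
    s = s_next

greedy = [int(np.argmax(agents[i].Q[s[i]])) for i in range(2)]
print("learned joint power indices:", greedy)
```

With the shared sum-rate reward, both agents learn to avoid the all-zero-power action, and depending on the gains they may discover that backing one link off improves the total rate. The paper's scheme replaces the Q-table with a deep network and scales to many users, but the agent/state/reward decomposition follows the same pattern.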