1. School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China
2. Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing, Jiangsu 210000, China
3. Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin 130012, China
[ "章鹏(1992-),男,江苏仪征人,苏州大学硕士生,主要研究方向为连续空间强化学习。" ]
[ "刘全(1969-),男,内蒙古牙克石人,苏州大学教授、博士生导师,主要研究方向为强化学习、智能信息处理和自动推理。" ]
[ "钟珊(1983-),女,湖南双峰人,苏州大学博士生,主要研究方向为机器学习和深度学习。" ]
[ "翟建伟(1992-),男,江苏盐城人,苏州大学硕士生,主要研究方向为深度学习和深度强化学习。" ]
[ "钱炜晟(1992-),男,江苏常熟人,苏州大学硕士生,主要研究方向为部分可观察马氏问题的近似规划方法。" ]
Online publication date: 2017-04
Print publication date: 2017-04-25
Peng ZHANG, Quan LIU, Shan ZHONG, et al. Actor-critic algorithm with incremental dual natural policy gradient[J]. Journal on communications, 2017, 38(4): 166-177. DOI: 10.11959/j.issn.1000-436x.2017089.
Existing reinforcement learning algorithms for continuous action spaces fail to fully consider how the optimal action should be selected and how knowledge of the action space can be exploited. To address this, an actor-critic algorithm that improves on the natural gradient is proposed. The algorithm takes maximization of the expected return as its objective and obtains the optimal action by weighting the upper and lower bounds of the action interval. The weights of the two bounds are approximated by linear functions, so that solving for the optimal action is converted into learning a pair of policy parameter vectors. To speed up the learning of these parameter vectors, an incremental Fisher information matrix and eligibility traces for the bound weights are designed, yielding an incremental natural actor-critic algorithm with a dual policy gradient. To verify its effectiveness, the proposed algorithm is compared with classical reinforcement learning algorithms for continuous action spaces on three benchmark reinforcement learning problems. The experimental results show that the proposed algorithm converges faster and more stably.
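As a rough illustration of the approach described above, the following sketch (not the authors' exact formulation) selects a continuous action as a softmax-weighted combination of the action interval's lower and upper bounds, approximates the two bound weights with linear functions of the state features, and updates the two policy parameter vectors with an incremental natural-gradient actor-critic step that maintains a Fisher information estimate and eligibility traces. All names, the Gaussian exploration scheme, and the step sizes are illustrative assumptions.

```python
# Minimal sketch, assuming a one-dimensional action interval, linear state features,
# and Gaussian exploration around the weighted-bound mean. Not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)

n = 8                        # dimension of the state feature vector (assumed)
a_low, a_high = -1.0, 1.0    # bounds of the action interval (assumed)
sigma = 0.2                  # exploration noise of the Gaussian policy (assumed)
alpha, beta = 0.01, 0.1      # actor / critic step sizes (assumed)
gamma, lam = 0.99, 0.9       # discount factor and trace-decay rate (assumed)

theta_lo = np.zeros(n)       # policy parameters weighting the lower bound
theta_hi = np.zeros(n)       # policy parameters weighting the upper bound
w = np.zeros(n)              # linear critic (state-value) parameters
e = np.zeros(2 * n)          # eligibility trace over both bound-weight vectors
F = np.eye(2 * n)            # incremental Fisher information estimate (regularized)


def policy_mean(phi):
    """Convex combination of the bounds; weights are a softmax of two linear scores."""
    s = np.array([theta_lo @ phi, theta_hi @ phi])
    k = np.exp(s - s.max())
    k /= k.sum()
    return k[0] * a_low + k[1] * a_high, k


def select_action(phi):
    """Sample an action around the weighted-bound mean and clip it to the interval."""
    mu, k = policy_mean(phi)
    a = np.clip(mu + sigma * rng.normal(), a_low, a_high)
    return a, mu, k


def update(phi, a, mu, k, reward, phi_next, done):
    """One incremental dual-parameter natural actor-critic step (illustrative)."""
    global theta_lo, theta_hi, w, e, F
    v_next = 0.0 if done else w @ phi_next
    delta = reward + gamma * v_next - w @ phi          # TD error from the linear critic
    w = w + beta * delta * phi                         # critic update

    # Chain rule: gradient of log N(a; mu, sigma^2) w.r.t. each bound-weight vector,
    # where mu is the softmax-weighted combination of a_low and a_high.
    dmu_dscore = k[0] * k[1] * (a_low - a_high)        # d mu / d score_lo = -d mu / d score_hi
    g_lo = ((a - mu) / sigma**2) * dmu_dscore * phi
    g_hi = -((a - mu) / sigma**2) * dmu_dscore * phi
    g = np.concatenate([g_lo, g_hi])

    e = gamma * lam * e + g                            # eligibility traces for both vectors
    F = F + np.outer(g, g)                             # incremental Fisher information
    nat = np.linalg.solve(F, delta * e)                # natural-gradient direction
    theta_lo = theta_lo + alpha * nat[:n]
    theta_hi = theta_hi + alpha * nat[n:]
```

A full agent would repeatedly call select_action on the current feature vector, apply the action in the environment, and then call update with the observed reward and the next feature vector.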