1. School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China
2. Provincial Key Laboratory of Computer Information Processing Technology, Soochow University, Suzhou, Jiangsu 215006, China
3. Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education, Jilin University, Changchun, Jilin 130012, China
4. Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing, Jiangsu 210093, China
[ "刘全(1969- ),男,内蒙古牙克石人,博士,苏州大学教授、博士生导师,主要研究方向为智能信息处理、自动推理与机器学习。" ]
[ "姜玉斌(1994- ),男,江苏盐城人,苏州大学硕士生,主要研究方向为强化学习、深度强化学习。" ]
[ "胡智慧(1994- ),女,江苏徐州人,苏州大学硕士生,主要研究方向为强化学习、深度强化学习。" ]
Online publication date: 2019-05
Print publication date: 2019-05-25
Quan LIU, Yubin JIANG, Zhihui HU. Advantage estimator based on importance sampling[J]. Journal on Communications, 2019, 40(5): 108-116. DOI: 10.11959/j.issn.1000-436x.2019122.
In continuous-action tasks, deep reinforcement learning typically uses a Gaussian distribution as the policy function. To address the slow convergence caused by clipping actions sampled from a Gaussian policy, an importance sampling advantage estimator (ISAE) was proposed. Building on the generalized advantage estimator (GAE), ISAE introduces an importance sampling mechanism: for boundary actions, the ratio between the target policy and the behavior policy is computed to correct the value-function bias caused by clipped actions, thereby speeding up convergence. In addition, ISAE introduces an L parameter that limits the range of the importance sampling ratio, which improves sample reliability and keeps the network parameters stable. To verify the effectiveness of ISAE, it was combined with proximal policy optimization (PPO) and compared with other algorithms on the MuJoCo platform. Experimental results show that ISAE converges faster.
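The abstract describes ISAE only at a high level. The sketch below illustrates one plausible reading of the idea under stated assumptions: a GAE-style backward pass over a trajectory in which the TD error of steps whose raw Gaussian action fell outside the action bounds (and was therefore clipped before execution) is reweighted by a target/behavior ratio, clamped by the L parameter for stability. The function name isae_advantages, the exact form of the ratio, the clamping rule, and all parameters are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.stats import norm

def isae_advantages(rewards, values, means, stds, raw_actions, low, high,
                    gamma=0.99, lam=0.95, L=2.0):
    # Sketch of an importance-sampling advantage estimator layered on GAE
    # for a scalar Gaussian action clipped to [low, high].
    # rewards, means, stds, raw_actions: length-T arrays for one segment.
    # values: length T+1 (last entry is the bootstrap value).
    # The importance ratio and the clamp by L are illustrative assumptions.
    T = len(rewards)
    adv = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # one-step TD error
        a = raw_actions[t]
        if a < low or a > high:
            # Probability mass the Gaussian policy assigns beyond the boundary
            # (i.e. to the clipped, executed action) versus the density of the
            # raw sampled action that the behavior policy actually drew.
            if a > high:
                target = 1.0 - norm.cdf(high, means[t], stds[t])
            else:
                target = norm.cdf(low, means[t], stds[t])
            behavior = norm.pdf(a, means[t], stds[t])
            rho = min(target / max(behavior, 1e-8), L)  # clamp the ratio with L
            delta *= rho
        gae = delta + gamma * lam * gae  # standard GAE recursion
        adv[t] = gae
    return adv

In a PPO-style training loop, the returned advantages would simply replace the usual GAE advantages when forming the clipped surrogate objective; only steps with clipped actions are affected, so the estimator reduces to GAE whenever all sampled actions stay inside the valid range.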