
A continuous reinforcement learning system and method based on stochastic differential equations

A technology combining stochastic differential equations and reinforcement learning, applied in the field of reinforcement learning for continuous systems, which can solve problems such as uncontrollable variance and failure to satisfy continuity conditions.

Active Publication Date: 2021-04-06
SHANGHAI UNIV

AI Technical Summary

Problems solved by technology

[0003] However, most current continuous reinforcement learning methods have theoretical shortcomings. For example, although the noise introduced by DDPG can guarantee the continuity of the action, it cannot control the variance; conversely, A3C under a Gaussian policy can control the variance but does not satisfy the continuity condition in theory.
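For context, the contrast above can be illustrated with a minimal sketch (not taken from the patent): DDPG adds temporally correlated Ornstein-Uhlenbeck noise to a deterministic action, which keeps the action trajectory continuous but leaves the noise scale as a fixed hyperparameter, while an A3C Gaussian policy exposes the variance as a learnable quantity but samples each action independently. All function and parameter names below are illustrative assumptions.

```python
import numpy as np

def ddpg_style_action(mu, ou_state, theta=0.15, sigma=0.2, dt=1e-2):
    """DDPG-style exploration: Ornstein-Uhlenbeck noise added to a deterministic
    action mu. Consecutive actions stay correlated (a continuous-looking trajectory),
    but the noise scale is a fixed hyperparameter rather than a controlled variance."""
    ou_state = ou_state + theta * (0.0 - ou_state) * dt + sigma * np.sqrt(dt) * np.random.randn()
    return mu + ou_state, ou_state

def a3c_gaussian_action(mu, log_std):
    """A3C-style Gaussian policy: the variance exp(log_std)**2 is an explicit,
    learnable quantity, but each action is drawn independently, so the resulting
    action sequence is not continuous in time."""
    return mu + np.exp(log_std) * np.random.randn()
```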




Embodiment Construction

[0045] In order to make the purpose, technical solution and advantages of the present invention clearer, the continuous reinforcement learning system and method based on stochastic differential equations of the present invention is further described below in conjunction with the accompanying drawings and embodiments.

[0046] The invention proposes a continuous reinforcement learning system and method based on stochastic differential equations, which are suitable for continuous control applications.

[0047] As shown in Figure 1, the continuous reinforcement learning method based on stochastic differential equations proposed by the present invention includes the following steps:

[0048] Step 1: Initialize all parameters of the action policy generator APG, the environment state estimator ESE, the value estimator VE, the memory storage module MS and the external environment EE included in the whole learning method.
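A minimal sketch of this initialization, assuming plain NumPy parameter containers for the APG, ESE, VE and MS components; the dimensions and layer structure are illustrative assumptions, not the patent's implementation, and the external environment EE is created separately (see the Pendulum-v0 example below).

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, act_dim, hidden = 3, 1, 64   # Pendulum-v0 sizes; the hidden width is an assumption

# APG: action policy generator, holding the action value parameter set θ_v
theta_v = {"w": rng.normal(scale=0.01, size=(obs_dim, hidden)),
           "out": rng.normal(scale=0.01, size=(hidden, act_dim))}

# ESE: environment state estimator, holding the environment state parameter set θ_p
theta_p = {"w": rng.normal(scale=0.01, size=(obs_dim + act_dim, hidden)),
           "out": rng.normal(scale=0.01, size=(hidden, obs_dim))}

# VE: value estimator, holding the Q-function network parameters
q_params = {"w": rng.normal(scale=0.01, size=(obs_dim + act_dim, hidden)),
            "out": rng.normal(scale=0.01, size=(hidden, 1))}

# MS: memory storage module, an initially empty transition buffer
ms = []
```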

[0049] The present invention takes the Pendulum-v0...
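The Pendulum-v0 task referenced here is the standard OpenAI Gym environment; the snippet below only shows the conventional environment setup that would serve as the external environment EE, not the patent's training procedure.

```python
import gym

env = gym.make("Pendulum-v0")   # external environment EE used in the embodiment
state = env.reset()             # observation: [cos(theta), sin(theta), angular velocity]
print(env.observation_space)    # 3-dimensional continuous observation space
print(env.action_space)         # 1-dimensional continuous torque, bounded in [-2, 2]
```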



Abstract

The invention discloses a continuous reinforcement learning system and method based on stochastic differential equations. The system includes an action policy generator APG, an environment state estimator ESE, a value estimator VE, a memory storage module MS and an external environment EE. The specific steps are as follows: initialize the action policy generator APG, the environment state estimator ESE and the value estimator VE; the action policy generator APG calculates the output action value increment Δa_k; the external environment EE outputs the next action value a_{k+1}, the next-step environment state value s_{k+1} and the current-step reward value R_k, which are stored in the memory storage module MS; the environment state estimator ESE updates the environment state parameter set θ_p and predicts the future environment state estimate s′_k; the VE optimizer updates the Q-function network and predicts the future reward estimate R′_k; the APG optimizer updates the action value parameter set θ_v. The method uses stochastic differential equations as its basic model, which makes it possible to keep action control continuous while controlling the variance of the training process, and to select actions by predicting changes in the environment so as to achieve better environmental interaction.
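The step sequence in the abstract can be rendered schematically as below. Every update rule here is a placeholder stub (simple linear maps and quadratic rewards) chosen only to make the snippet runnable; the patent's actual stochastic-differential-equation increments and optimizer formulas are not reproduced, only the order in which APG, EE, MS, ESE and VE interact.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, act_dim = 3, 1
theta_p = rng.normal(scale=0.01, size=(obs_dim, obs_dim))   # ESE parameter set θ_p (stub)
theta_v = rng.normal(scale=0.01, size=(obs_dim, act_dim))   # APG parameter set θ_v (stub)
ms = []                                                     # MS: memory storage module
s = rng.normal(size=obs_dim)                                # initial environment state value
a = np.zeros(act_dim)                                       # initial action value

for k in range(10):
    delta_a = s @ theta_v                                   # APG: action value increment Δa_k (stub rule)
    a_next = a + delta_a                                    # next action value a_{k+1}
    s_next = s @ theta_p + 0.01 * rng.normal(size=obs_dim)  # EE: next-step state value s_{k+1} (stub dynamics)
    r = -float(np.sum(s_next ** 2))                         # EE: current-step reward value R_k (stub)
    ms.append((a_next.copy(), s_next.copy(), r))            # MS: store the transition
    theta_p -= 1e-3 * np.outer(s, s @ theta_p - s_next)     # ESE: update θ_p on the one-step prediction error
    s_pred = s_next @ theta_p                               # ESE: predicted future state estimate s′_k
    r_pred = -float(np.sum(s_pred ** 2))                    # VE: predicted future reward R′_k (Q update stubbed out)
    theta_v += 1e-3 * np.outer(s, delta_a) * r_pred         # APG optimizer: update θ_v (stub gradient step)
    a, s = a_next, s_next
```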

Description

Technical field
[0001] The invention relates to the fields of reinforcement learning and stochastic processes, and in particular to a reinforcement learning method for continuous systems.
Background technique
[0002] Deep reinforcement learning is an end-to-end learning system that combines the perception ability of deep learning with the decision-making ability of reinforcement learning; it has strong versatility and realizes direct control from raw input to output. Reinforcement learning has become a very important unsupervised learning method, which enables the agent to judge the current environment state through the value function during interaction with the environment, and thereby take corresponding actions to obtain better rewards. At present, reinforcement learning algorithms mainly focus on discrete action policy sets, while classic continuous reinforcement learning methods such as DDPG and A3C can be used for continuous motion control in applications such as robo...


Application Information

Patent Type & Authority: Patents (China)
IPC(8): G06F17/13; G06K9/62
CPC: G06F17/13; G06F18/295; G06F18/214
Inventors: 贾文川, 程丽梅, 陈添豪, 孙翊, 马书根
Owner: SHANGHAI UNIV