
Reinforcement learning path planning algorithm based on potential field

A reinforcement learning and path planning technology, applied in the field of potential-field-based reinforcement learning path planning, which addresses problems such as heavy computation, growing computational cost as the algorithm scales, and difficulty in planning a good path.

Inactive Publication Date: 2020-02-14
BEIJING UNIV OF POSTS & TELECOMM

Problems solved by technology

This method is simple and practical, but computationally intensive, so it is suitable only for simple environments.
The quadtree method improves on the grid map method by recursively dividing the environment map into four parts; its tree data structure allows the environment to be modeled quickly. However, when the environment contains many small obstacles, the tree deepens and the amount of computation increases accordingly.
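The quadtree decomposition described above can be sketched as follows; this is a minimal illustration over a binary occupancy grid, and the recursion threshold and grid representation are assumptions, not details from the patent:

```python
import numpy as np

def quadtree_decompose(grid, x, y, size, min_size=1):
    """Recursively split a square occupancy-grid region into four quadrants
    until each cell is uniformly free or occupied (or min_size is reached).
    Returns a list of (x, y, size, occupied) leaf cells."""
    region = grid[y:y + size, x:x + size]
    if region.min() == region.max() or size <= min_size:
        return [(x, y, size, bool(region.max()))]
    half = size // 2
    leaves = []
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        leaves += quadtree_decompose(grid, x + dx, y + dy, half, min_size)
    return leaves

# A 4x4 map with one small obstacle: the tree subdivides only near it,
# yielding 3 uniform free quadrants plus 4 unit cells around the obstacle.
grid = np.zeros((4, 4), dtype=int)
grid[0, 0] = 1
leaves = quadtree_decompose(grid, 0, 0, 4)
```

The example illustrates the drawback noted in the text: each additional small obstacle forces further subdivision, deepening the tree and increasing the number of leaf cells the planner must consider.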
[0008] A study of existing robot path planning methods shows that current approaches still face certain limitations in complex dynamic environments.
Once the number of obstacles reaches a certain magnitude, the computational load of the algorithm grows, computational efficiency drops, and it becomes difficult to plan a good path.



Embodiment Construction

[0036] According to the number n and the positions of the static and moving obstacles in the environment, use the artificial potential field method to define the environment model: following the repulsive and gravitational (attractive) potential field formulas above, define the repulsive potential field U_rep,i at each obstacle i and the attractive potential field U_att(P) at the target position;
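As an illustration of the environment model in step [0036], the sketch below uses the standard artificial-potential-field formulas (quadratic attractive potential toward the goal; repulsive potential active within an influence distance of each obstacle). The gains k_att, k_rep and the influence distance d0 are assumed values, not coefficients taken from the patent:

```python
import numpy as np

# Standard artificial potential field (a common textbook form):
#   U_att(p) = 0.5 * k_att * ||p - p_goal||^2
#   U_rep(p) = sum_i 0.5 * k_rep * (1/d_i - 1/d0)^2   if d_i <= d0, else 0
def attractive_potential(p, goal, k_att=1.0):
    """Quadratic attractive potential pulling the robot toward the goal."""
    return 0.5 * k_att * np.sum((p - goal) ** 2)

def repulsive_potential(p, obstacles, k_rep=1.0, d0=2.0):
    """Repulsive potential; each obstacle contributes only within distance d0."""
    u = 0.0
    for obs in obstacles:
        d = np.linalg.norm(p - obs)
        if 0 < d <= d0:
            u += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return u

def total_potential(p, goal, obstacles):
    """Total field U(p) = U_att(p) + U_rep(p) used to model the environment."""
    return attractive_potential(p, goal) + repulsive_potential(p, obstacles)
```

The total potential is low near the goal and rises steeply near obstacles, which is exactly the structure the subsequent reinforcement learning steps exploit.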

[0037] Define the path planning problem under the potential field as a reinforcement learning problem: define the Markov decision process according to the state function, action function, and reward function formulas above, and use the DDPG algorithm to optimize the decision process;
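The MDP formulation in step [0037] can be sketched as follows. The state, action, and reward definitions here (position state, bounded displacement action, reward equal to the decrease in total potential plus a goal bonus and collision penalty) are illustrative assumptions, since the patent's exact formulas are not reproduced in this summary:

```python
import numpy as np

class PotentialFieldMDP:
    """Minimal MDP wrapper over a potential field (illustrative assumptions:
    state = robot position, action = bounded 2-D displacement,
    reward = drop in potential + goal bonus - collision penalty)."""

    def __init__(self, start, goal, obstacles, potential, goal_radius=0.3):
        self.start = np.asarray(start, dtype=float)
        self.goal = np.asarray(goal, dtype=float)
        self.obstacles = [np.asarray(o, dtype=float) for o in obstacles]
        self.potential = potential          # callable p -> scalar U(p)
        self.goal_radius = goal_radius
        self.p = self.start.copy()

    def reset(self):
        self.p = self.start.copy()
        return self.p.copy()

    def step(self, action):
        u_old = self.potential(self.p)
        self.p = self.p + np.clip(action, -0.5, 0.5)   # bounded continuous move
        u_new = self.potential(self.p)
        done = np.linalg.norm(self.p - self.goal) < self.goal_radius
        reward = (u_old - u_new) + (10.0 if done else 0.0)
        for obs in self.obstacles:
            if np.linalg.norm(self.p - obs) < 0.2:
                reward -= 5.0                          # collision penalty (assumed)
        return self.p.copy(), reward, done

# Example: moving down the potential gradient yields a positive reward.
env = PotentialFieldMDP(start=[1.0, 0.0], goal=[0.0, 0.0], obstacles=[],
                        potential=lambda p: float(np.sum(p ** 2)))
env.reset()
state, reward, done = env.step(np.array([-0.5, 0.0]))
```

Because the reward is the decrease in potential, a policy that maximizes return is driven toward the goal (low potential) and away from obstacles (high potential), which is what DDPG then optimizes over continuous actions.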

[0038] Establish the main network model and the target network model of the DDPG reinforcement learning algorithm. The main network updates its parameters by gradient descent, while the target network updates its parameters via soft update. The two net...
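The soft update mentioned in step [0038] can be illustrated with a framework-agnostic sketch, where parameters are plain arrays; tau = 0.005 is a typical DDPG value, not taken from the patent:

```python
import numpy as np

# DDPG keeps two copies of each network: the main (online) network, trained
# by gradient descent, and a target network that slowly tracks it via a soft
# (Polyak) update:  theta_target <- tau * theta_main + (1 - tau) * theta_target
def soft_update(target_params, main_params, tau=0.005):
    """Return target parameters blended toward the main parameters."""
    return [tau * m + (1.0 - tau) * t
            for t, m in zip(target_params, main_params)]

# Example: one soft-update step with tau = 0.5 moves every target entry
# halfway from 0 toward the main network's value of 1.
main = [np.ones((2, 2))]
target = [np.zeros((2, 2))]
target = soft_update(target, main, tau=0.5)
```

Keeping tau small makes the target network change slowly, which stabilizes the bootstrapped value targets used to train the main network.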


Abstract

The invention belongs to the field of intelligent algorithm optimization and provides a potential-field-based reinforcement learning robot path planning algorithm for complex environments, realizing robot path planning in a complex dynamic environment containing a large number of movable obstacles. The method comprises the following steps: modeling the environment space with the traditional artificial potential field method; defining the state function, reward function, and action function of a Markov decision process according to the potential field model; and training them in a simulation environment with the deep deterministic policy gradient (DDPG) reinforcement learning algorithm, thereby giving the robot the decision-making capability to plan collision-free paths in a complex obstacle environment. Experimental results show that the method offers short decision time, low system resource occupation, and a degree of robustness, and can realize robot path planning under complex environmental conditions.

Description

technical field [0001] The invention belongs to the field of intelligent algorithm optimization and relates to a potential-field-based reinforcement learning path planning algorithm for complex dynamic environments. Background technique [0002] Path planning refers to the process by which a robot plans an optimal path from a starting point to a target point without colliding with obstacles. Path planning is a constrained optimization problem; its optimization criteria usually include shortest time, best route, and lowest energy consumption, and the problem is characterized by complexity, randomness, and multiple constraints. According to the algorithm model, path planning methods can be divided into traditional methods and intelligent methods. Commonly used traditional methods include the grid method, the artificial potential field method, and the topological space method; commonly used intelligent methods i...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G05D1/02
CPC: G05D1/0221; G05D1/0223; G05D1/0276
Inventors: 褚明, 苗雨, 杨茂男, 穆新鹏, 尚明明
Owner BEIJING UNIV OF POSTS & TELECOMM