
A Mixed-Experience Multi-Agent Reinforcement Learning Method for Motion Planning

A multi-agent motion planning technology in the field of deep learning. It addresses the problems that existing methods are difficult to apply to dynamic, complex environments, train unstably, and do not take the environment into account, and achieves the effects of faster training, a well-trained strategy, and a reduced network update frequency.

Active Publication Date: 2022-02-25
NORTHWESTERN POLYTECHNICAL UNIV
Cites: 8 · Cited by: 0

AI Technical Summary

Problems solved by technology

[0004] The artificial potential field method offers a simple and effective obstacle avoidance planning strategy, but it suffers from local minima and is difficult to apply to dynamic, complex environments. The MADDPG algorithm is insensitive to environment complexity and learns autonomously, but it converges with difficulty and its training is unstable.
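For context, here is a minimal sketch of the artificial potential field idea referred to above. The gains, influence radius, and formulas are standard textbook choices and illustrative assumptions, not values from the patent:

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=1.0, d0=2.0):
    """Classic artificial potential field force: attraction toward the goal plus
    repulsion from obstacles within an influence radius d0.

    Local minima occur when the attractive and repulsive terms cancel out,
    which is why pure APF planning can stall in cluttered, dynamic environments.
    """
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                   # attractive term
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:                           # only nearby obstacles repel
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return force
```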



Examples


Specific embodiment

[0178] 1. Establish a stochastic game model for multi-agent motion planning in complex environments.

[0179] This embodiment addresses a multi-agent reinforcement learning problem and uses a stochastic game as the environment model.
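As a point of reference, a stochastic (Markov) game for n agents is commonly written as the tuple below; this is standard notation, not quoted from the patent text:

```latex
G = \langle n,\; S,\; A_1,\dots,A_n,\; P,\; R_1,\dots,R_n,\; \gamma \rangle
```

where S is the joint state space, A_i is the action space of agent i, P(s' | s, a_1, ..., a_n) is the state transition probability, R_i is the reward function of agent i, and γ is the discount factor.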

[0180] 1.1. Set the physical model of the agents and the obstacles. The schematic diagram of the model is shown in Figure 2.

[0181] Each agent is modeled as a circular smart car, and the number of agents is n, with n = 5 in this embodiment. The invention assumes that all agents share the same physical model. For agent i, the radius is set to r_i^a = 0.5 m, the speed is u_i = 1.0 m/s, and the velocity angle ψ_i denotes the angle between the velocity and the positive X-axis, with range (-π, π]. The target of agent i is a circular area of radius r_i^g = 1.0 m located at P_i^g. The distance from agent i is D(P_i^a, P_i^g). When D(P_i^a, P_i^g) ≤ r_i^a + r_i^g, ...
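A minimal sketch of this physical model, assuming Python; the class and field names are hypothetical, and only the numeric values and the goal test D(P_i^a, P_i^g) ≤ r_i^a + r_i^g come from the paragraph above:

```python
import math
from dataclasses import dataclass

@dataclass
class Agent:
    """Circular smart-car agent as described in paragraph [0181]."""
    x: float                 # position P_i^a, x component
    y: float                 # position P_i^a, y component
    radius: float = 0.5      # r_i^a = 0.5 m
    speed: float = 1.0       # u_i = 1.0 m/s
    psi: float = 0.0         # velocity angle in (-pi, pi], measured from +X axis

@dataclass
class Goal:
    """Circular target area of agent i."""
    x: float                 # position P_i^g, x component
    y: float                 # position P_i^g, y component
    radius: float = 1.0      # r_i^g = 1.0 m

def reached_goal(agent: Agent, goal: Goal) -> bool:
    """Agent i has reached its target when D(P_i^a, P_i^g) <= r_i^a + r_i^g."""
    d = math.hypot(agent.x - goal.x, agent.y - goal.y)
    return d <= agent.radius + goal.radius
```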



Abstract

The invention discloses a mixed-experience multi-agent reinforcement learning motion planning method, namely the ME-MADDPG algorithm. The method is trained with the MADDPG algorithm. When generating samples, it not only produces experience through exploration and learning, but also adds high-quality experience in which multiple UAVs are successfully planned to their targets by the artificial potential field method, and it stores the two kinds of experience in separate experience pools. During training, the neural network draws samples from the two experience pools with dynamically changing probabilities, takes each agent's own state information and environmental information as the network input, and outputs the velocities of the agents. At the same time, the neural network is updated slowly during training so that the multi-agent motion planning strategy is trained stably; in the end, the agents can autonomously avoid obstacles in a complex environment and smoothly reach their respective target positions. The invention can efficiently train a motion planning strategy with good stability and adaptability in complex, dynamic environments.
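A minimal sketch of the mixed-experience sampling idea described above, assuming Python; the buffer class, batch size, and the linear schedule for the mixing probability are illustrative assumptions, not the patent's exact formulas:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size experience pool of (state, action, reward, next_state) tuples."""
    def __init__(self, capacity=100_000):
        self.data = deque(maxlen=capacity)

    def add(self, transition):
        self.data.append(transition)

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

# Two pools: one filled by the agents' own exploration,
# one filled by artificial potential field (APF) demonstrations.
explore_pool = ReplayBuffer()
apf_pool = ReplayBuffer()

def sample_mixed_batch(episode, batch_size=64, total_episodes=10_000):
    """Draw a training batch from both pools with a changing mixing probability.

    Here the probability of using APF experience decays linearly over training
    (an assumed schedule): early on, the high-quality APF samples dominate;
    later, the agents' own exploration experience takes over.
    """
    p_apf = max(0.0, 1.0 - episode / total_episodes)
    n_apf = int(batch_size * p_apf)
    batch = apf_pool.sample(n_apf) + explore_pool.sample(batch_size - n_apf)
    random.shuffle(batch)
    return batch
```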

Description

Technical field

[0001] The invention belongs to the technical field of deep learning and in particular relates to a multi-agent reinforcement learning motion planning method.

Background technique

[0002] With the vigorous development of scientific theory and technology, multi-agent systems are more and more widely used in people's daily production and life, for example in ... driving, etc. These fields require multi-agent motion planning technology. The multi-agent motion planning problem is the problem of finding a conflict-free set of optimal paths that take multiple agents from their starting positions to their target positions. How to make the agents efficiently avoid obstacles and other agents and reach their designated areas has become a major research problem.

[0003] The motion planning methods currently proposed by researchers can generally be divided into global path planning and local path planning. Although global path planning can efficiently and quickly...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G05D1/02
CPC: G05D1/0214; G05D1/0221; G05D1/0223
Inventors: 万开方, 武鼎威, 高晓光
Owner: NORTHWESTERN POLYTECHNICAL UNIV