
Unmanned aerial vehicle obstacle avoidance method based on deep reinforcement learning

A technology combining reinforcement learning and unmanned aerial vehicles, applied in the field of vehicle position/route/height control, non-electric variable control, instruments, etc. It can solve the problem of an unstable training process and achieve the effects of stable training, applicability, reliability and scalability.

Pending Publication Date: 2022-02-25
NANJING UNIV OF AERONAUTICS & ASTRONAUTICS

AI Technical Summary

Problems solved by technology

Reinforcement learning has been applied to the obstacle avoidance problem of UAVs. Since UAV obstacle avoidance is a problem in continuous space, a neural network must be combined with reinforcement learning to assign a value to each state-action pair; however, learning combined with a neural network is prone to an unstable training process.
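As a minimal sketch of the idea referred to above (not the patent's actual network), the snippet below uses a small fully connected network to assign a value to each state-action pair for a continuous UAV state, together with an epsilon-greedy selection rule. The state dimension, action count, layer widths, and all names are illustrative assumptions.

```python
# Hypothetical sketch: a Q-network over a continuous state, one output per
# discrete candidate action, plus epsilon-greedy action selection.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_dim: int = 6, num_actions: int = 6, hidden: int = 64):
        super().__init__()
        # Input: continuous UAV state (e.g. position/velocity components, assumed);
        # output: one Q-value per discrete candidate action.
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def select_action(q_net: QNetwork, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy choice over the network's value estimates."""
    num_actions = q_net.net[-1].out_features
    if torch.rand(1).item() < epsilon:
        return int(torch.randint(0, num_actions, (1,)).item())
    with torch.no_grad():
        return int(q_net(state).argmax().item())
```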



Examples


Embodiment Construction

[0028] In order to enable those skilled in the art to better understand the technical solutions of the present invention, the present invention will be further described in detail below in conjunction with specific embodiments.

[0029] In the UAV obstacle avoidance method based on deep reinforcement learning of the present invention, whose flow chart is shown in figure 1, the UAV flies in an environment containing unknown obstacles. An action is selected according to the greedy strategy; after the action is executed and the UAV interacts with the environment, a new state is generated and the reward produced by the state change is calculated. The algorithm stores the UAV's state before the action, the action taken, the reward obtained, and the state after the action into positive and negative experience pools according to the size of the reward value, and then draws samples from the two experience pools to form training...
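As a rough illustration of the positive/negative experience pools described above (a sketch under assumptions, not the patent's exact scheme), the buffer below routes each transition by its reward value and mixes samples from both pools into a training batch. The reward threshold, pool capacity, and mixing ratio are hypothetical.

```python
# Hypothetical split experience replay: transitions are stored in a "positive"
# or "negative" pool by reward value, and batches are drawn from both pools.
import random
from collections import deque

class SplitReplayBuffer:
    def __init__(self, capacity: int = 10000, reward_threshold: float = 0.0):
        self.positive = deque(maxlen=capacity)   # transitions with reward > threshold
        self.negative = deque(maxlen=capacity)   # transitions with reward <= threshold
        self.reward_threshold = reward_threshold

    def store(self, state, action, reward, next_state):
        transition = (state, action, reward, next_state)
        if reward > self.reward_threshold:
            self.positive.append(transition)
        else:
            self.negative.append(transition)

    def sample(self, batch_size: int, positive_ratio: float = 0.5):
        # Mix both pools so rare high-reward transitions are not drowned out
        # by the far more frequent penalty transitions.
        n_pos = min(int(batch_size * positive_ratio), len(self.positive))
        n_neg = min(batch_size - n_pos, len(self.negative))
        return (random.sample(list(self.positive), n_pos)
                + random.sample(list(self.negative), n_neg))
```

Sampling from both pools keeps informative transitions in every batch, which is the mechanism the abstract credits for improved training efficiency.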



Abstract

The invention discloses an unmanned aerial vehicle obstacle avoidance method based on deep reinforcement learning. The method comprises the following steps: (1) building a UAV obstacle avoidance flight model in three-dimensional space, and randomly generating the number and positions of obstacles and the starting point of the UAV; (2) establishing an environment model based on a Markov process framework; (3) selecting actions based on states and a strategy, letting the UAV interact with the environment to generate a new state after taking an action, calculating the obtained reward, forming quadruples from the state, the action, the reward and the state at the next moment, and storing the quadruples in a sample space through an improved sample-sampling method for training; (4) updating the network with samples drawn from the environment model using an improved DDQN algorithm, and assigning values to the state-action pairs of the samples; (5) selecting the optimal action according to the value assigned to each action in the sampled state, thereby obtaining the optimal strategy. The invention provides a reinforcement learning obstacle avoidance method that adopts a segmented sampling pool, which improves the training efficiency of strategy generation.
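Step (4) refers to the Double DQN update rule, in which the online network selects the greedy next action and the target network evaluates it. The sketch below shows only that target computation; it is a generic DDQN fragment under assumed tensor shapes, not the patent's improved algorithm.

```python
# Hypothetical Double DQN target: y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).
import torch

def ddqn_targets(online_net, target_net, rewards, next_states, dones, gamma: float = 0.99):
    """Compute batched DDQN targets; rewards and dones are 1-D float tensors,
    next_states is a (batch, state_dim) tensor, networks return (batch, num_actions)."""
    with torch.no_grad():
        # Online network picks the greedy next action...
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        # ...and the target network evaluates that action.
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q
```

Decoupling action selection from action evaluation in this way reduces the value overestimation of plain DQN, which is the usual motivation for a DDQN-style update.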

Description

Technical field
[0001] The invention belongs to the technical field of intelligent decision-making, and in particular relates to an obstacle avoidance method for unmanned aerial vehicles based on deep reinforcement learning.
Background technique
[0002] As UAVs play an increasingly important role in military and civilian fields, they are required to fly autonomously and complete their tasks without human intervention in a variety of mission scenarios. Therefore, finding a suitable method for the obstacle avoidance problem in autonomous UAV flight can improve the mission success rate of UAVs to a certain extent. Traditional obstacle avoidance methods, such as the artificial potential field method, the visibility graph method and the particle swarm optimization algorithm, are mature, but they need to establish different models for different situations. However, in the actual flying environment, UAVs are often required to detect and make real-time decisions ...

Claims


Application Information

IPC(8): G05D1/08, G05D1/10
CPC: G05D1/0808, G05D1/101, Y02T10/40
Inventor: 曹红波, 赵启, 刘亮, 甄子洋
Owner: NANJING UNIV OF AERONAUTICS & ASTRONAUTICS