
Unmanned aerial vehicle autonomous flight training method based on reinforcement learning and transfer learning

A technology combining reinforcement learning and transfer learning, applied to ensemble learning, adaptive control, instruments, and related fields. It addresses the problems that hand-written flight rules cannot cope with complex and changeable environments and that a flight strategy trained purely in simulation cannot be applied to the real environment, thereby reducing the adverse effects of the simulation-reality gap and making the algorithm more robust.

Pending Publication Date: 2021-08-20
NANJING UNIV

AI Technical Summary

Problems solved by technology

[0005] Purpose of the invention: In autonomous flight control of UAVs, manual rule-based control cannot handle complex and changeable environments, and reinforcement learning algorithms that train UAV flight strategies in a simulator face unavoidable differences between the simulated environment and the real environment, so the trained flight strategy cannot be applied to the real environment. To address these problems, the present invention provides a UAV autonomous flight training method based on reinforcement learning and transfer learning.



Embodiment Construction

[0024] The present invention is further illustrated below in conjunction with specific embodiments. It should be understood that these embodiments are only used to illustrate the present invention and are not intended to limit its scope. After reading the present invention, those skilled in the art will understand that modifications of all equivalent forms of the present invention fall within the scope defined by the appended claims of the present application.

[0025] In the UAV autonomous flight training method based on reinforcement learning and transfer learning, flight data are collected in the real environment and a state transition model of the real environment is learned. At the same time, the UAV flight strategy and the inverse transfer model of the simulator environment are trained in the simulator, and the transfer model of the real environment together with the inverse transfer model of the simulator environment is used to correct the flight actions to be executed in the simulator; the corrected actions are executed in the simulator to obtain simulated flight data, with which the flight strategy and the inverse transfer model are updated until the strategy converges.
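The action-correction step admits a compact illustration. The following Python sketch is not the patent's implementation: it assumes PyTorch, a hypothetical 12-dimensional state and 4-dimensional action, and made-up names (MLP, real_forward_model, sim_inverse_model, correct_action). A learned forward model of the real environment predicts where the policy's action would take the real UAV, and a learned inverse model of the simulator returns the simulator action that reproduces that transition:

import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small fully connected network used for both transfer models (illustrative only)."""
    def __init__(self, in_dim, out_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

STATE_DIM, ACTION_DIM = 12, 4  # assumed UAV state and command dimensions, not from the patent

# Transfer model of the real environment: (state, action) -> predicted next state.
real_forward_model = MLP(STATE_DIM + ACTION_DIM, STATE_DIM)

# Inverse transfer model of the simulator: (state, desired next state) -> simulator action.
sim_inverse_model = MLP(STATE_DIM + STATE_DIM, ACTION_DIM)

def correct_action(state: torch.Tensor, policy_action: torch.Tensor) -> torch.Tensor:
    """Map the policy's action to a simulator action whose outcome mimics the real environment."""
    with torch.no_grad():
        # Predict the next state the real UAV would reach under the policy's action.
        predicted_real_next = real_forward_model(torch.cat([state, policy_action], dim=-1))
        # Ask the simulator's inverse model for the action that reproduces that transition.
        corrected = sim_inverse_model(torch.cat([state, predicted_real_next], dim=-1))
    return corrected

# Example: correct a single action for a single state.
print(correct_action(torch.zeros(STATE_DIM), torch.zeros(ACTION_DIM)))

In this reading, the two learned models act as a bridge between the dynamics of the simulator and the real environment, so that experience gathered in simulation better reflects how actions would play out on the real UAV.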



Abstract

The invention discloses an unmanned aerial vehicle autonomous flight training method based on reinforcement learning and transfer learning. The method comprises the following steps:
(1) creating an unmanned aerial vehicle simulator environment;
(2) constructing an environment transfer model based on deep learning and randomly initializing its mapping;
(3) constructing a reinforcement learning A3C algorithm and randomly initializing its flight strategy;
(4) constructing an environment inverse transfer model based on deep learning;
(5) collecting flight data obtained by having an operator and the current strategy fly the unmanned aerial vehicle in the real environment;
(6) updating the environment transfer model based on the real flight data;
(7) carrying out transfer learning based on action correction, correcting the flight strategy, and executing the corrected strategy in the simulator to obtain simulated flight data;
(8) based on the simulated flight data, updating the flight strategy with the A3C algorithm while updating the environment inverse transfer model, until the strategy converges; the final strategy serves as the initial flight strategy of the real unmanned aerial vehicle.
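Taken together, steps (5)-(8) describe an alternating loop: ground the simulator with real flight data, correct each action through the two transfer models, then improve the policy with A3C on the corrected simulated rollouts. The Python outline below is a hedged sketch of that loop, not the patent's code; simulator, collect_real_flights, policy, a3c_update, the two transfer models, and correct_action are all hypothetical placeholders supplied by the caller:

def train_uav_policy(simulator, collect_real_flights, policy, a3c_update,
                     real_forward_model, sim_inverse_model, correct_action,
                     n_iterations=100, episode_length=500):
    """Schematic outline of steps (5)-(8); every argument is a caller-supplied placeholder."""
    for _ in range(n_iterations):
        # (5)-(6) Gather real flight data and refit the transfer model of the real environment.
        real_data = collect_real_flights(policy)
        real_forward_model.fit(real_data)

        # (7) Fly the corrected strategy in the simulator to obtain simulated flight data.
        sim_data = []
        state = simulator.reset()
        for _ in range(episode_length):
            action = correct_action(state, policy.act(state))  # action-correction transfer
            next_state, reward, done = simulator.step(action)
            sim_data.append((state, action, reward, next_state, done))
            state = simulator.reset() if done else next_state

        # (8) Update the flight strategy with A3C and refit the simulator's inverse transfer model.
        a3c_update(policy, sim_data)
        sim_inverse_model.fit(sim_data)
    return policy  # serves as the initial flight strategy for the real UAV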

Description

Technical field

[0001] The invention relates to a UAV autonomous flight training method based on reinforcement learning and transfer learning, and belongs to the technical field of UAV autonomous flight control.

Background technique

[0002] Autonomous flight control of UAVs in diverse, complex, and rapidly changing environments has always been a difficult point in the field of UAV flight control. Traditional flight control writes flight control rules by hand: all situations that the UAV may encounter during flight are considered in advance, and then, combining the professional knowledge and experience of experts in the UAV field, the enumerated cases are handled through feedback control, rule writing, and similar techniques. However, writing rules requires a great deal of labor; moreover, situations interact with one another, and if some situations, or the interactions between them, are not taken into account, flight control will fail...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G05B13/04, G06N3/04, G06N20/20
CPC: G05B13/042, G06N20/20, G06N3/045
Inventor: 俞扬, 詹德川, 周志华, 黄军富, 庞竟成, 张云天, 管聪, 陈雄辉
Owner: NANJING UNIV