Strongly robust attitude control method for unmanned aerial vehicles based on deep reinforcement learning

A strongly robust reinforcement learning control technology, applied to attitude control, vehicle position/route/altitude control, non-electric variable control, etc. It addresses problems such as heavy computational load, controller jitter or even divergence, and theoretical complexity, and achieves improved adaptability and response speed, a reduced controller quantization problem, and improved generalization ability.

Inactive Publication Date: 2022-03-25
南通因诺航空科技有限公司

AI Technical Summary

Problems solved by technology

For example, CN113485437A uses a neural network to adjust PID parameters to adapt to different flight environments, but when the UAV is in a dynamically changing environment the controller oscillates or even diverges; CN111857171B constructs a neural network from the state equation to solve for the optimal solution, but in some nonlinear, complex environments the control effect is poor for strongly inertial plants; CN113359440A uses implicit dynamics to convert the UAV control problem into solving the control input parameters of a time-varying second-order system, but the method is theoretically complex and computationally expensive, and when the environment is strongly time-varying the control response can exhibit severe lag and oscillation.
Therefore, most traditional control algorithms design the controller against a digital six-degree-of-freedom model, but because of the modeling error between the digital model and the real environment, the portability and control performance of traditional algorithms are greatly reduced.

Detailed Description of the Embodiments

[0046] The present invention is described in further detail below in conjunction with the examples, which are intended to explain rather than limit the present invention.

[0047] The present invention provides a strongly robust attitude control method for unmanned aerial vehicles based on deep reinforcement learning, comprising the following operations:

[0048] 1) Collect real aircraft flight data and simulated flight data, including the data stream of aircraft state s_t, action a_t, and the corresponding next state s_{t+1};
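
As a rough, hypothetical illustration of this data stream (not taken from the patent text), each sample can be stored as an (s_t, a_t, s_{t+1}) transition record; the field names and contents below are assumptions.

# Hypothetical sketch of the (s_t, a_t, s_{t+1}) data stream; field names
# and contents are assumptions, not the patent's own data format.
from dataclasses import dataclass
import numpy as np

@dataclass
class Transition:
    state: np.ndarray       # s_t: e.g. attitude angles, angular rates
    action: np.ndarray      # a_t: e.g. rudder, elevator, aileron commands
    next_state: np.ndarray  # s_{t+1}: state observed after applying a_t

def collect_stream(flight_log):
    """Turn a time-ordered log of (state, action) pairs into transitions."""
    transitions = []
    for t in range(len(flight_log) - 1):
        s_t, a_t = flight_log[t]
        s_next, _ = flight_log[t + 1]
        transitions.append(Transition(s_t, a_t, s_next))
    return transitions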

[0049] Assign set weights to the real flight data and the simulated flight data, and combine them to form a digital model of the aircraft;
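
One simple reading of this weighting step is sketched below, under the assumption that the set weights control how often real versus simulated transitions are drawn when fitting the digital model; the 0.7/0.3 split and the function name are placeholders, not values from the patent.

import random

def sample_mixed_batch(real_data, sim_data, batch_size,
                       real_weight=0.7, sim_weight=0.3):
    # Assumed interpretation: the weights set the real/simulated mixing ratio.
    n_real = round(batch_size * real_weight / (real_weight + sim_weight))
    n_sim = batch_size - n_real
    batch = random.sample(real_data, min(n_real, len(real_data)))
    batch += random.sample(sim_data, min(n_sim, len(sim_data)))
    random.shuffle(batch)
    return batch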

[0050] Then normalize and preprocess each state quantity of the aircraft in the digital model to a dimensionless value between 0 and 1;
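
A minimal sketch of this preprocessing step, assuming simple per-channel min-max scaling; the per-channel bounds would be taken from the collected flight data and are not specified in the patent text.

import numpy as np

def normalize_states(states, s_min, s_max, eps=1e-8):
    """Scale each state quantity to a dimensionless value in [0, 1].
    states: array of shape (N, state_dim); s_min/s_max: per-channel bounds."""
    return np.clip((states - s_min) / (s_max - s_min + eps), 0.0, 1.0)

# Example usage with bounds estimated from the data itself:
# s_min, s_max = states.min(axis=0), states.max(axis=0)
# states_norm = normalize_states(states, s_min, s_max)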

[0051] 2) The preprocessed digital model of the aircraft is used as the input to a Bayesian neural network, the network weight distributions are randomly initialized, and the aircraft dynamics model introduced by the...
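
The sketch below illustrates one possible Bayesian neural network dynamics model with randomly initialized weight distributions; the one-hidden-layer architecture, layer sizes, and Gaussian weight parameterization are assumptions, not the patent's exact network. Sampling the weights on each forward pass yields a predictive distribution over the next state, which is what allows the model to represent disturbance and uncertainty in the real flight environment.

import numpy as np

class BayesianDynamicsModel:
    """Predicts s_{t+1} from (s_t, a_t) with weights drawn from Gaussian
    distributions, so repeated predictions reflect model uncertainty."""
    def __init__(self, state_dim, action_dim, hidden=64, rng=None):
        self.rng = rng or np.random.default_rng()
        in_dim = state_dim + action_dim
        # Randomly initialized weight distributions (mean and std per weight).
        self.w1_mu = self.rng.normal(0.0, 0.1, (in_dim, hidden))
        self.w1_sigma = np.full((in_dim, hidden), 0.05)
        self.w2_mu = self.rng.normal(0.0, 0.1, (hidden, state_dim))
        self.w2_sigma = np.full((hidden, state_dim), 0.05)

    def predict(self, state, action, n_samples=20):
        """Mean and spread of the predicted next state over sampled weights."""
        x = np.concatenate([state, action])
        outputs = []
        for _ in range(n_samples):
            w1 = self.rng.normal(self.w1_mu, self.w1_sigma)
            w2 = self.rng.normal(self.w2_mu, self.w2_sigma)
            outputs.append(np.tanh(x @ w1) @ w2)
        outputs = np.stack(outputs)
        return outputs.mean(axis=0), outputs.std(axis=0)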


Abstract

The invention discloses a strongly robust attitude control method for unmanned aerial vehicles based on deep reinforcement learning. A Bayesian probability model is used to better simulate the disturbances and uncertainty in the real flight environment, and the fitted aircraft dynamics model is used as the input of a reinforcement learning framework based on the DDPG algorithm. The neural network parameters are updated by randomly sampling from the aircraft digital model and by interacting with the various flight data acquired from real flights of the aircraft. The output is the set of aircraft control-surface commands, comprising the rudder, elevator and aileron. According to the invention, the Bayesian neural network improves the accuracy of the aircraft model, bringing it closer to the real flight environment; the neural-network-based control system exploits its generalization ability to improve the control performance of the aircraft in various disturbance environments; moreover, the controller trained offline can be rapidly ported to various airborne platforms, giving the method high practical value.
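
For readers unfamiliar with DDPG, the following is a minimal, hypothetical sketch (assuming PyTorch) of the actor-critic networks and soft target update used in a DDPG-style framework; only the three actor outputs (rudder, elevator, aileron) follow the abstract, while the layer sizes and the tanh output scaling are assumptions.

import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps the normalized aircraft state to three control-surface commands."""
    def __init__(self, state_dim, action_dim=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # deflections in [-1, 1]
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Estimates the action value Q(s, a) for a state-action pair."""
    def __init__(self, state_dim, action_dim=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def soft_update(target, source, tau=0.005):
    """Polyak averaging of target-network parameters, as used in DDPG."""
    for t_param, s_param in zip(target.parameters(), source.parameters()):
        t_param.data.copy_(tau * s_param.data + (1.0 - tau) * t_param.data)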

Description

Technical Field

[0001] The invention belongs to the technical field of unmanned aerial vehicle attitude control, and relates to a strongly robust attitude control method for unmanned aerial vehicles based on deep reinforcement learning.

Background

[0002] In recent years, the control technology of fixed-wing UAVs has matured. Traditional UAV attitude control systems, such as PID/sliding-mode control and their optimized variants, have shown excellent performance in many situations that remain close to a steady state. For example, CN113485437A uses a neural network to adjust PID parameters to adapt to different flight environments, but when the UAV is in a dynamically changing environment the controller oscillates or even diverges; CN111857171B constructs a neural network from the state equation to solve for the optimal solution, but in some nonlinear, complex environments the control effect is poor for strongly inertial plants; CN113359440A uses implicit dynamics to convert the UAV control problem into solving the control input parameters of a time-varying second-order system, but the method is theoretically complex and computationally expensive, and when the environment is strongly time-varying the control response can exhibit severe lag and oscillation.

Application Information

Patent Type & Authority: Application (China)
IPC(8): G05D1/08
CPC: G05D1/0833
Inventor: 呼卫军, 全家乐
Owner: 南通因诺航空科技有限公司