
Multi-aircraft cooperative air combat planning method and system based on deep reinforcement learning

A reinforcement-learning-based planning technology, applied to neural learning methods, stochastic CAD, and design optimization/simulation. It addresses problems such as heavy computational load, difficulty of solution, and inability to meet real-time decision-making requirements, achieving good training results and improved exploration ability.

Active Publication Date: 2021-12-03
NAT UNIV OF DEFENSE TECH

AI Technical Summary

Problems solved by technology

[0014] The purpose of the present invention is to provide a multi-aircraft cooperative air combat planning method and system based on deep reinforcement learning, in order to solve the technical problems of the prior art in realistic air combat scenarios, where information is incomplete and the opponent's strategy is unknown and changes in real time: when such a system makes air combat decisions, the computational load is large, the solution is difficult to obtain, and real-time decision-making requirements cannot be met.

Method used



Examples


Embodiment Construction

[0049] The technical solutions of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art, based on the embodiments of the present invention and without creative effort, fall within the protection scope of the present invention.

[0050] Figure 4 shows the battlefield situation information map of the experimental scenario environment. In this scenario, the red and blue forces are configured symmetrically, each comprising three fighter jets and one base from which aircraft can take off and land. The scenario area is a rectangular high-seas region 1400 kilometers long and 1000 kilometers wide. In the scenario deduction, aircraft take off from the base, escort their own base, and destroy t...
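The symmetric scenario described above (a 1400 km by 1000 km area, with each side fielding three fighters and one base) can be encoded as a simple configuration object. This is an illustrative sketch only; the class and field names are assumptions, not data structures from the patent.

```python
from dataclasses import dataclass


@dataclass
class Scenario:
    """Illustrative encoding of the experimental scenario:
    a rectangular high-seas area with two symmetric sides."""
    width_km: float = 1400.0       # length of the rectangular area
    height_km: float = 1000.0      # width of the rectangular area
    fighters_per_side: int = 3     # fighter jets per side (red / blue)
    bases_per_side: int = 1        # bases that can launch and recover aircraft

    def total_units(self) -> int:
        # Two symmetric sides: red and blue.
        return 2 * (self.fighters_per_side + self.bases_per_side)


scenario = Scenario()
print(scenario.total_units())  # 8 units in total: 6 fighters + 2 bases
```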



Abstract

The present invention proposes a multi-aircraft cooperative air combat planning method and system based on deep reinforcement learning. By treating each fighter aircraft as an agent, a reinforcement learning agent model is constructed, and the network model is trained with a centralized-training, distributed-execution architecture; this overcomes the weak exploratory behavior that arises in multi-aircraft cooperation when the actions of different entities are poorly differentiated. By embedding expert experience in the reward value, the prior-art requirement for a large amount of expert experience is removed. Through an experience-sharing mechanism, all agents share one set of network parameters and one experience replay library, which addresses the problem that a single agent's strategy depends not only on its own strategy and the environment's feedback but is also affected by the behavior and cooperation of other agents. By increasing the sampling probability of samples with larger absolute advantage values, samples with extremely large or extremely small reward values influence the training of the neural network more strongly, speeding up the convergence of the algorithm. Adding policy entropy improves the agents' exploration ability.
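Two of the mechanisms the abstract names, sampling transitions in proportion to the absolute value of their advantage and adding a policy-entropy bonus, can be sketched in a few lines. This is a minimal illustration of the general techniques, not the patent's actual implementation; all class, function, and parameter names are assumptions.

```python
import numpy as np


class AdvantagePrioritizedBuffer:
    """Shared replay buffer that samples transitions with probability
    proportional to the absolute value of their advantage estimate,
    so extreme-advantage samples are replayed more often."""

    def __init__(self, capacity=10000, eps=1e-6):
        self.capacity = capacity
        self.eps = eps      # keeps every sample's probability nonzero
        self.data = []      # list of (state, action, reward, advantage)

    def add(self, state, action, reward, advantage):
        if len(self.data) >= self.capacity:
            self.data.pop(0)            # drop the oldest transition
        self.data.append((state, action, reward, advantage))

    def sample(self, batch_size, rng=np.random):
        # Sampling weight = |advantage| + eps, normalized to a distribution.
        weights = np.array([abs(t[3]) for t in self.data]) + self.eps
        probs = weights / weights.sum()
        idx = rng.choice(len(self.data), size=batch_size, p=probs)
        return [self.data[i] for i in idx]


def entropy_bonus(action_probs, beta=0.01):
    """Policy-entropy term added to the training objective to
    encourage exploration (larger for more uniform policies)."""
    p = np.clip(np.asarray(action_probs, dtype=float), 1e-12, 1.0)
    return beta * -np.sum(p * np.log(p))


buffer = AdvantagePrioritizedBuffer(capacity=5)
for adv in [0.1, 0.2, 5.0, 0.3, 0.1, 0.2]:     # 6 adds, oldest is evicted
    buffer.add(state=None, action=0, reward=0.0, advantage=adv)
batch = buffer.sample(batch_size=3)            # biased toward |adv| = 5.0
print(len(buffer.data), len(batch))            # 5 3
```

In a shared-parameter setup like the one the abstract describes, all agents would push their transitions into this single buffer and train one network from its samples.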

Description

technical field

[0001] The invention belongs to the technical field of aerial cooperative combat, and in particular relates to a multi-aircraft cooperative air combat planning method and system based on deep reinforcement learning.

Background technique

[0002] Since the 1990s, the development of information technology has driven military reform. The traditional combat style, in which each platform uses its own sensors and weapon systems to detect, track, and strike targets, can no longer meet the needs of digital warfare. Facing the increasingly complex battlefield environment of modern warfare, a single fighter has limited ability to detect, track, and attack targets and cannot independently complete designated air-to-air combat missions. Multiple fighters therefore need to cooperate to maximize combat effectiveness.

[0003] Multi-aircraft coordinated air combat refers to a mode of warfare in which two or more combat aircraft cooperate with each other to ...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06F30/27; G06N3/04; G06N3/08; G06F111/08
CPC: G06F30/27; G06N3/084; G06F2111/08; G06N3/045
Inventor: 冯旸赫, 程光权, 施伟, 黄魁华, 黄金才, 刘忠
Owner: NAT UNIV OF DEFENSE TECH