A Self-Supervised Learning Fusion Method for Multiband Images

A fusion method and self-supervised learning technique, applied in the field of image fusion, which can solve problems such as the lack of labeled images and limited fusion results.

Active Publication Date: 2022-07-05
ZHONGBEI UNIV

AI Technical Summary

Problems solved by technology

[0004] In order to solve the problem of limited fusion results caused by the lack of label images when deep learning methods are used to fuse multi-band images, the present invention proposes a new self-supervised learning fusion method for multi-band images based on a multi-discriminator generative adversarial network.

Method used


Examples


Embodiment Construction

[0027] The multi-discriminator-based self-supervised learning fusion method for multi-band images includes the following steps:

[0028] The first step is to design and build the generative adversarial network: a multi-discriminator generative adversarial network structure is designed and constructed, consisting of one generator and multiple discriminators; taking n-band image fusion as an example, one generator and n discriminators are used.
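
A minimal structural sketch of this composition, assuming a PyTorch implementation; the discriminator layer layout below is a placeholder, since the excerpt does not specify the actual architecture:

```python
import torch.nn as nn

class Discriminator(nn.Module):
    """One discriminator per band: scores whether an input image looks like
    its own source band (illustrative layer layout, not the patented one)."""
    def __init__(self, in_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),  # single real/fake logit
        )

    def forward(self, x):
        return self.net(x)

# n-band fusion: one generator (sketched below) and n discriminators, one per band.
n_bands = 3
discriminators = nn.ModuleList(Discriminator(in_channels=1) for _ in range(n_bands))
```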

[0029] The generator network structure consists of a feature enhancement module and a feature fusion module. The feature enhancement module extracts the features of the source images of different bands and enhances them to obtain a multi-channel feature map for each band. The feature fusion module uses a merge (concatenation) layer to connect the features in the channel dimension and reconstructs the connected feature map into a fused image, as follows:

[0030] The feature enhancement...
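
As an illustration of the generator layout described in [0029] (per-band feature enhancement, channel-dimension concatenation, reconstruction of the fused image), here is a hedged PyTorch sketch; `FeatureEnhance` and its layers are placeholders, since the excerpt truncates before the actual feature-enhancement details:

```python
import torch
import torch.nn as nn

class FeatureEnhance(nn.Module):
    """Extracts and enhances the features of one source band into a
    multi-channel feature map (placeholder layers)."""
    def __init__(self, in_channels=1, feat_channels=32):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.block(x)

class Generator(nn.Module):
    """Feature enhancement per band, merge (concatenation) along the channel
    dimension, then reconstruction of the fused image."""
    def __init__(self, n_bands=3, feat_channels=32):
        super().__init__()
        self.enhancers = nn.ModuleList(
            FeatureEnhance(1, feat_channels) for _ in range(n_bands))
        self.reconstruct = nn.Sequential(
            nn.Conv2d(n_bands * feat_channels, feat_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_channels, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, bands):              # bands: list of (B, 1, H, W) tensors
        feats = [enh(b) for enh, b in zip(self.enhancers, bands)]
        merged = torch.cat(feats, dim=1)   # merge connection in the channel dimension
        return self.reconstruct(merged)    # fused image
```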



Abstract

The invention relates to a multi-band image fusion method, in particular to a self-supervised multi-band image fusion method based on a multi-discriminator generative adversarial network. The method is carried out according to the following steps: a generative adversarial network is designed and constructed, composed of one generator and multiple discriminators, with the label images being the multi-band source images themselves; the generator network structure is composed of two parts, a feature enhancement module and a feature fusion module; a generative model is obtained through dynamic-balance training of the generator and the discriminators, and the multi-band image fusion result is produced. The invention realizes an end-to-end self-supervised neural network for multi-band image fusion, and the result has better clarity, a larger amount of information, richer detail, and better conforms to the visual characteristics of the human eye.
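
To illustrate the self-supervised idea that the label images are the source images themselves, here is a hedged training-step sketch built on the Generator/Discriminator placeholders above; the actual loss functions, optimizers, and balancing schedule are not given in this excerpt:

```python
import torch
import torch.nn as nn

def train_step(gen, discs, bands, opt_g, opt_ds, bce=nn.BCEWithLogitsLoss()):
    """One adversarial step: each discriminator compares the fused image
    against its own source band, which serves as the label."""
    fused = gen(bands)

    # Update each band's discriminator: source band = real, fused image = fake.
    for d, band, opt_d in zip(discs, bands, opt_ds):
        opt_d.zero_grad()
        real_logit = d(band)
        fake_logit = d(fused.detach())
        loss_d = (bce(real_logit, torch.ones_like(real_logit)) +
                  bce(fake_logit, torch.zeros_like(fake_logit)))
        loss_d.backward()
        opt_d.step()

    # Update the generator: try to fool every discriminator simultaneously,
    # driving a dynamic balance between the generator and all discriminators.
    opt_g.zero_grad()
    loss_g = 0.0
    for d in discs:
        fake_logit = d(fused)
        loss_g = loss_g + bce(fake_logit, torch.ones_like(fake_logit))
    loss_g.backward()
    opt_g.step()
    return float(loss_g)
```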

Description

technical field

[0001] The invention relates to an image fusion method, in particular to a multi-band image fusion method, and more particularly to a self-supervised learning fusion method for multi-band images.

Background technique

[0002] At present, wide-spectrum multi-band imaging has been widely used in high-precision detection systems, yet existing fusion research mainly addresses only the infrared and visible light bands; it is therefore urgent to explore the simultaneous fusion of multiple (≥3) band images. In recent years, research on image fusion based on deep artificial neural networks has emerged. However, the field of image fusion lacks standard fusion results, i.e., image fusion models built with deep learning generally lack labeled data, which makes deep learning training difficult or the fusion effect poor. The larger the number of synchronously fused images, the more prominent this problem becomes.

[0003] Self-supervised learning is one...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T5/50, G06V10/764, G06V10/82, G06K9/62, G06N3/04, G06N3/08
CPC: G06T5/50, G06N3/08, G06T2207/20221, G06N3/045, G06F18/256, G06F18/253, G06F18/24, G06F18/214
Inventor: 蔺素珍, 田嵩旺, 禄晓飞, 李大威, 李毅, 王丽芳
Owner ZHONGBEI UNIV