
Image fusion method based on self-learning neural unit

A neural-unit and image-fusion technology, applied in the field of image fusion based on a self-learning neural unit

Active Publication Date: 2019-08-02
JIANGNAN UNIV
Cites 2 · Cited by 4

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to address the deficiencies of the above-mentioned prior art by proposing a self-learning neural unit for image fusion that solves the problems of activity-level measurement and fusion-rule design. A loss function drives the fusion neural unit to jointly obtain activity-level measurements and weight assignments in an optimal way, thereby enhancing image clarity, improving the visual effect, and improving the quality of the fused image.
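The idea above, replacing hand-designed fusion rules with a loss-driven weight assignment, can be sketched as a scalar objective. This is an illustrative composition only: the weighting factors `lam_*` and the exact combination of terms are assumptions, not the patent's exact formulation.

```python
import numpy as np

def l1_penalty(weights):
    """L1 norm of the fusion weights, encouraging sparse weight assignment."""
    return np.abs(weights).sum()

def fusion_loss(ssim_ir, ssim_vis, mi_norm, fusion_weights,
                lam_ssim=1.0, lam_mi=0.5, lam_l1=1e-3):
    # ssim_ir / ssim_vis: SSIM between the fused image and each source (in [0, 1])
    # mi_norm: normalized mutual information retained from the sources
    # Structural-similarity terms pull the fused image toward both sources;
    # the L1 term regularizes the learned fusion weights.
    similarity = 0.5 * (ssim_ir + ssim_vis)
    return (lam_ssim * (1.0 - similarity)
            + lam_mi * (1.0 - mi_norm)
            + lam_l1 * l1_penalty(fusion_weights))

w = np.array([0.7, 0.2, 0.1])          # hypothetical fusion weights
print(round(fusion_loss(0.9, 0.8, 0.75, w), 4))  # → 0.276
```

Minimizing such a loss over the network parameters is what lets the fusion unit "learn" its activity-level measurement instead of following a fixed rule.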

Method used


Examples


Embodiment Construction

[0071] An embodiment of the present invention (IR-VIS infrared and visible-light image fusion) will be described in detail below in conjunction with the accompanying drawings. This embodiment is carried out on the premise of the technical solution of the present invention. As shown in Figure 1, the detailed implementation and specific operation steps are as follows:

[0072] Step 1. Perform Mask R-CNN processing on the infrared and visible-light images to obtain the corresponding mask image, mask matrix, category information, and score information. According to the correctness of the classification information, judged subjectively by the human eye, and the needs of the task, the mask matrices of the infrared image whose category information is "person" are retained.
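The mask-selection part of Step 1 can be sketched as follows. The code assumes detector outputs in the common Mask R-CNN format (per-instance soft masks, integer labels, confidence scores) with COCO label 1 for "person"; the threshold values are illustrative.

```python
import numpy as np

# COCO class id for "person" in Mask R-CNN-style detectors (assumption).
PERSON_LABEL = 1

def select_person_masks(masks, labels, scores, score_thresh=0.5):
    """Keep binary masks whose predicted category is 'person' and whose
    confidence exceeds score_thresh, then take their union as the
    salient-target region of the infrared image."""
    keep = [(m > 0.5).astype(np.uint8)
            for m, l, s in zip(masks, labels, scores)
            if l == PERSON_LABEL and s >= score_thresh]
    if not keep:
        return np.zeros_like(masks[0], dtype=np.uint8)
    return np.clip(np.sum(keep, axis=0), 0, 1).astype(np.uint8)

# Toy example: two 4x4 soft masks from a hypothetical detector.
masks = np.array([np.eye(4), np.ones((4, 4)) * 0.9])
labels = np.array([1, 3])    # only the first detection is "person"
scores = np.array([0.95, 0.99])
person_mask = select_person_masks(masks, labels, scores)
print(person_mask.sum())     # → 4 (only the diagonal "person" mask survives)
```

In practice the masks would come from a pretrained detector such as torchvision's `maskrcnn_resnet50_fpn`, with the same filtering applied to its output dictionary.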

[0073] Step 2. Build an autoencoder network, and use a convolutional neural network (CNN) to select, fuse, and reconstruct image features. The autoencoder network consists of three parts: an encoding layer, a fusion layer, and a decoding...
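The encode-fuse-decode structure of Step 2 can be sketched in PyTorch. Layer widths, kernel sizes, and the 1x1 fusion convolution are illustrative assumptions, not the patent's exact configuration.

```python
import torch
import torch.nn as nn

class FusionAutoencoder(nn.Module):
    """Minimal sketch: shared encoder per source, a learnable fusion
    convolution, and a decoder that reconstructs the fused image."""
    def __init__(self, feat=16):
        super().__init__()
        # Encoding layer: shared CNN extracting features from each source.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        # Fusion layer: a 1x1 convolution over concatenated features; its
        # learned weights play the role of activity-level measurement and
        # weight assignment instead of a hand-designed fusion rule.
        self.fusion = nn.Conv2d(2 * feat, feat, 1)
        # Decoding layer: reconstructs the fused image from fused features.
        self.decoder = nn.Sequential(
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, ir, vis):
        f = torch.cat([self.encoder(ir), self.encoder(vis)], dim=1)
        return self.decoder(self.fusion(f))

ir = torch.rand(1, 1, 64, 64)    # infrared source image
vis = torch.rand(1, 1, 64, 64)   # visible-light source image
fused = FusionAutoencoder()(ir, vis)
print(fused.shape)               # torch.Size([1, 1, 64, 64])
```

Sparse initialization and the norm constraints of Step 3 would then be applied to `self.fusion.weight` during training.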



Abstract

The invention discloses an image fusion method based on a self-learning neural unit, and belongs to the field of image fusion. The method comprises the following implementation steps: 1) feeding the images to be fused into a Mask R-CNN network to obtain the corresponding mask image, mask matrix, category information, and score information; 2) constructing an autoencoder network and performing image feature selection, fusion, and reconstruction with a convolutional neural network (CNN); 3) performing sparse assignment of the fusion layer's convolution weights, and adding a min/max norm weight constraint and an L1 regularization term; 4) computing the overall structural similarity (SSIM), the regional structural similarity (SSIM), and the mutual information (MI) between the fused image and the source images; 5) training the neural network and adjusting its parameters. With this neural-network-based image fusion method, activity-level measurement and weight distribution are obtained jointly and optimally by learning the network parameters, which enhances image definition, improves the visual effect, and improves the quality of the fused image.
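The mutual-information term from step 4 can be estimated from a joint histogram of the fused and source images. This is a standard histogram estimator, not the patent's exact computation; the bin count is an illustrative choice.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of MI(a, b): how much information about one
    image is retained in the other. Higher is better for fusion quality."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

img = np.random.rand(64, 64)
print(mutual_information(img, img))                      # high: identical images
print(mutual_information(img, np.random.rand(64, 64)))   # near zero: independent
```

During training, MI between the fused image and each source would enter the loss alongside the global and regional SSIM terms.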

Description

technical field

[0001] The invention belongs to the field of image fusion, and relates to an image fusion method based on a self-learning neural unit, which is widely used in the military, remote-sensing, and computer fields.

Background technique

[0002] With the rapid development of image fusion technology and its wide application in military, remote sensing, and other markets, multi-source image fusion technology has attracted the attention of researchers. In the past few years, various image fusion methods have been proposed. The traditional MST fusion method based on multi-scale transformation processes images in the transform domain in a multi-scale manner: it solves the transform-domain representation corresponding to the source images, fuses the images according to artificially designed fusion rules, and applies the inverse transform to obtain the final image. Such methods include fusion methods based on the Laplace transform (LAP) and fusion algorithms based on non-...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06N3/04
CPC: G06N3/045; G06F18/253; Y02T10/40
Inventors: 罗晓清, 张战成, 刘子闻, 张宝成
Owner: JIANGNAN UNIV