
Infrared-visible light image fusion method based on saliency map and convolutional neural network

A convolutional neural network and infrared-image technology, applied to neural learning methods, biological neural network models, neural architectures, etc., achieving good visual effects, enhanced contrast, and improved clarity.

Pending Publication Date: 2020-05-19
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

[0009] The present invention proposes an algorithm for infrared-visible light image fusion based on a saliency map and a convolutional neural network. The algorithm combines pixel-level and feature-level fusion to remedy the shortcomings of any single fusion method.




Embodiment Construction

[0021] Specific embodiments

[0022] To make the technical solution of the present invention clearer, its specific implementation is further described below with reference to the accompanying drawings. The flow chart of the implementation is shown in Figure 1.

[0023] 1) Take pre-collected infrared images and their corresponding visible light images as a data set, and name each group of images in the format *-1.png and *-2.png, corresponding to the infrared image and the visible light image respectively.
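The pairing convention in step 1) can be sketched as a small helper. This is a hypothetical illustration: the function name `pair_images` and the directory-listing approach are assumptions, not part of the patent.

```python
# Pair infrared/visible images by the *-1.png / *-2.png naming scheme
# described in step 1).  Hypothetical sketch; names are assumptions.
import os

def pair_images(dataset_dir):
    """Return {group_name: (ir_path, vis_path)} for each named group."""
    pairs = {}
    for fname in sorted(os.listdir(dataset_dir)):
        if fname.endswith("-1.png"):              # infrared image
            name = fname[:-len("-1.png")]
            vis = name + "-2.png"                 # matching visible-light image
            if os.path.exists(os.path.join(dataset_dir, vis)):
                pairs[name] = (os.path.join(dataset_dir, fname),
                               os.path.join(dataset_dir, vis))
    return pairs
```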

[0024] 2) Mean filtering is performed on each pair of source images to obtain their low-frequency and high-frequency components. The low-frequency component represents the overall intensity of the image; the high-frequency component represents where the intensity changes sharply, i.e. the contours and details of the image. The la...
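The decomposition in step 2) can be sketched as follows: a mean (box) filter produces the low-frequency component, and subtracting it from the source yields the high-frequency residual. The kernel size and the naive sliding-window implementation are assumptions for illustration.

```python
# Mean-filter decomposition of a source image into low- and high-frequency
# components, as in step 2).  Kernel size k is an assumption.
import numpy as np

def mean_filter(img, k=11):
    """Box (mean) filter via an edge-padded sliding-window average."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def decompose(img, k=11):
    low = mean_filter(img, k)            # low-frequency: overall intensity
    high = img.astype(np.float64) - low  # high-frequency: contours/details
    return low, high
```

By construction the two components sum back to the source image, which is what lets step 5) recombine the fused components by simple addition.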



Abstract

The invention relates to an infrared-visible light image fusion method based on a saliency map and a convolutional neural network. The method comprises the following steps: 1) build a data set from pre-collected infrared images and their corresponding visible light images; 2) perform mean filtering on each group of infrared and visible light images with mean filters of different kernel sizes to decompose the source images into high- and low-frequency components; 3) fuse the low-frequency components; 4) extract features from the high-frequency components of the different sources with a deep convolutional neural network, and fuse the infrared and visible-light feature maps obtained at each convolution layer through a weighting strategy; 5) add the low-frequency fused image obtained in step 3) and the high-frequency fused image obtained in step 4) to obtain the final fused image.
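The five steps of the abstract can be sketched end to end. This is a simplified stand-in, not the patented method: the saliency map (step 3) is approximated by each pixel's distance from the image mean, and the CNN feature weighting (step 4) is replaced by max-absolute selection of high-frequency detail; both substitutions are assumptions for illustration only.

```python
# End-to-end sketch of the five-step pipeline in the abstract.
# Saliency weighting and CNN feature fusion are replaced by simple stand-ins.
import numpy as np

def _box(img, k):
    """Naive edge-padded mean filter (step 2 decomposition helper)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def fuse(ir, vis, k=11):
    ir = ir.astype(np.float64)
    vis = vis.astype(np.float64)
    # Step 2: decompose each source into low/high frequency components.
    ir_low, vis_low = _box(ir, k), _box(vis, k)
    ir_high, vis_high = ir - ir_low, vis - vis_low
    # Step 3: low-frequency fusion.  Stand-in saliency weight: distance of
    # each pixel from the image mean (an assumption; the patent's saliency
    # map construction is not detailed in this listing).
    w_ir = np.abs(ir_low - ir_low.mean()) + 1e-8
    w_vis = np.abs(vis_low - vis_low.mean()) + 1e-8
    low = (w_ir * ir_low + w_vis * vis_low) / (w_ir + w_vis)
    # Step 4: high-frequency fusion.  Stand-in for the CNN feature weighting:
    # keep whichever source has the larger detail magnitude at each pixel.
    high = np.where(np.abs(ir_high) >= np.abs(vis_high), ir_high, vis_high)
    # Step 5: recombine low- and high-frequency fused images.
    return low + high
```

A useful sanity check of the structure: when both inputs are identical, the stand-in weights are equal and the pipeline returns the input unchanged.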

Description

Technical field

[0001] The invention belongs to the fields of deep learning, computer vision and image fusion, and relates to an infrared-visible light image fusion method based on a saliency map and a convolutional neural network.

Background technique

[0002] With the development of sensor technology, the use of multiple sensors has increased the amount of information acquired by a system, and traditional information processing methods are inefficient. Image fusion technology can comprehensively process, at multiple levels, the information obtained by different sensors, so as to retain the most effective information, remove redundant information, and improve processing efficiency. In the field of image fusion, the most important branch is the fusion of infrared and visible light images. Visible light images are reflective images that capture the details and textures of a scene under suitable lighting conditions. Infrared images can capture electr...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T5/50, G06N3/04, G06N3/08
CPC: G06T5/50, G06N3/08, G06N3/045
Inventor: 侯春萍, 王霄聪, 杨阳, 夏晗
Owner TIANJIN UNIV