
Multi-modal image fusion method based on convolution analysis operator

A multi-modal image fusion method, applied in the field of image fusion, which addresses the problem that a single-modality image cannot fully express all the information of a scene, avoids artifacts and over-fusion, and improves reconstruction quality.

Pending Publication Date: 2020-11-06
Applicant: 四川警察学院 (Sichuan Police College)


Problems solved by technology

However, because the physical imaging mechanisms differ, a single-modality image cannot fully express all the information required by the scene.

Method used



Detailed Description of Embodiments

[0032] In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments.

[0033] The present invention uses the Convolutional Analysis Operator Learning (CAOL) framework with orthogonal constraints proposed by I. Y. Chun, which enforces a tight-frame (TF) condition on the filters from a convolutional perspective. The USC-SIPI image dataset (50 standard 512×512 images) is applied to this dictionary learning framework (abbreviated CAOL-TF) to obtain compact and diverse dictionary filters (in the present invention, the learned dictionary filters have size 11×11×100). The dictionary learning problem is expressed as follows.

[0034] $\min_{D,\,\{z_{l,k}\}} \; \sum_{l=1}^{L} \sum_{k=1}^{K} \frac{1}{2}\,\lVert d_k \circledast x_l - z_{l,k} \rVert_2^2 + \alpha\,\lVert z_{l,k} \rVert_0$  (1)

[0035] $\text{s.t.}\;\; D D^{\mathsf{T}} = \frac{1}{K} I$  (2)

[0036] $D := [d_1, \ldots, d_K]$  (3)

[0037] where $\circledast$ represents the convolution operator; $\{d_k\}_{k=1}^{K}$ represents the set of convolution kernels (filters); $\alpha$ is a threshold parameter that controls feature sparsity; $x_l$ represents the $l$-th training image; $z_{l,k}$ represents the sparse feature map of $x_l$ under filter $d_k$; and constraint (2) imposes the tight-frame (orthogonality) condition on $D$.
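For fixed filters, the minimiser of objective (1) over each sparse map $z_{l,k}$ has a closed form: convolve the image with $d_k$ and hard-threshold the result at $\sqrt{2\alpha}$ (the proximal operator of the $\ell_0$ penalty). The following Python sketch illustrates that step; the function names, the use of 'valid'-mode correlation, and the tiny example filters are illustrative assumptions, not details taken from the patent:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view


def conv2d_valid(img, kernel):
    """'Valid'-mode 2-D correlation (no kernel flip); a simple stand-in
    for the convolution operator in equation (1)."""
    windows = sliding_window_view(img, kernel.shape)
    return np.einsum("ijkl,kl->ij", windows, kernel)


def sparse_codes(img, filters, alpha):
    """For fixed filters d_k, minimise (1) over z_{l,k}:
    z = hard_threshold(d_k (*) x_l, sqrt(2 * alpha))."""
    thr = np.sqrt(2.0 * alpha)
    codes = []
    for d in filters:
        z = conv2d_valid(img, d)
        z[np.abs(z) < thr] = 0.0  # l0 proximal step: kill small responses
        codes.append(z)
    return codes
```

With a 1×1 identity filter and `alpha = 2.0` (threshold 2), responses of magnitude below 2 are zeroed while larger ones pass through unchanged, which is exactly the sparsifying behaviour the threshold parameter α controls.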



Abstract

The invention discloses a multi-modal image fusion method based on a convolution analysis operator. The method comprises the following steps: 1, decomposing each source image through the fast Fourier transform to obtain a low-frequency component and a high-frequency component; 2, fusing the low-frequency components; 3, fusing the high-frequency components; and 4, reconstructing the image from the fusion result of the low-frequency components and the fusion result of the high-frequency components. The method better expresses image features, markedly improves the reconstruction quality of the fused image, and better preserves edges in the reconstructed image.
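The four steps of the abstract can be sketched in Python as follows. The ideal circular low-pass mask, the cutoff fraction, and the averaging / max-absolute fusion rules are placeholder assumptions for illustration only; the abstract does not specify the actual fusion rules:

```python
import numpy as np


def fft_decompose(img, cutoff=0.1):
    """Step 1: split an image into low- and high-frequency components
    using an ideal circular low-pass mask in the Fourier domain.
    The cutoff fraction is an illustrative choice."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    mask = radius <= cutoff * min(h, w)
    low = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
    high = img - low  # residual carries edges and texture
    return low, high


def fuse(img_a, img_b, cutoff=0.1):
    """Steps 1-4 with placeholder fusion rules: averaging for the
    low-frequency parts, max-absolute selection for the high-frequency
    parts, then additive reconstruction."""
    la, ha = fft_decompose(img_a, cutoff)
    lb, hb = fft_decompose(img_b, cutoff)
    low_f = 0.5 * (la + lb)                              # step 2 (placeholder rule)
    high_f = np.where(np.abs(ha) >= np.abs(hb), ha, hb)  # step 3 (placeholder rule)
    return low_f + high_f                                # step 4: reconstruction
```

By construction `low + high` recovers the source exactly, so fusing an image with itself returns the image unchanged, a quick sanity check for any decomposition-based fusion pipeline.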

Description

Technical Field

[0001] The present invention relates to the technical field of image fusion, and in particular to a multi-modal image fusion method based on a convolution analysis operator.

Background

[0002] Multi-modal images are widely used in many settings, such as station security checks and medical diagnosis. However, because the physical imaging mechanisms differ, a single-modality image cannot fully express all the information required by the scene. Therefore, to obtain more detailed information, multi-modal image fusion technology has been developed to compensate for the limitations of individual imaging devices. Moreover, the resulting fused image facilitates subsequent target monitoring, recognition, and tracking in multi-modal scenes. Images obtained by multi-modal image fusion algorithms based on multi-scale transformation exhibit partial artifacts or over-fusion, and multi-scale transformation is restricted by the number of...

Claims


Application Information

IPC(8): G06T5/50
CPC: G06T5/50; G06T2207/20056; G06T2207/20221; G06T2207/20081; G06T2207/10048; G06T2207/30016
Inventor: 张铖方
Owner: 四川警察学院 (Sichuan Police College)