
Multi-focus image fusion method based on directional filter and deconvolutional neural network

A multi-focus image fusion and directional filter technology, applied in the field of image processing, which solves the problems that existing methods cannot accurately extract the detail information of multi-focus images, cannot form a complete semantic expression of the multi-focus images, and cannot accurately describe image detail information, achieving the effects of saving training time, reducing computational complexity, and simple design.

Active Publication Date: 2022-03-04
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

[0004] The classic wavelet transform is a transform-domain method and has achieved good results in fusion. When a wavelet transform is applied once to a two-dimensional digital image, it yields four components of the same size: a smooth low-frequency component and high-frequency components in three different directions. Because the random disturbance in these components is relatively large, the detail information of the image cannot be described accurately when the image is reconstructed and fused, so there is still much room to increase the amount of information in the fused image.
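For orientation, a minimal sketch of the single-level two-dimensional wavelet decomposition described above, assuming the PyWavelets package (pywt) and a grayscale image held in a NumPy array; the wavelet choice ('haar') is an illustrative assumption, not one taken from the patent:

# Minimal sketch of a single-level 2-D wavelet decomposition.
import numpy as np
import pywt

image = np.random.rand(256, 256)  # stand-in for one multi-focus source image

# One analysis step yields four equally sized components:
#   cA         - smooth low-frequency approximation
#   cH, cV, cD - high-frequency detail in the horizontal, vertical and diagonal directions
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')

print(cA.shape, cH.shape, cV.shape, cD.shape)  # each component is 128 x 128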
[0005] In his master's thesis "Research on Image Fusion Algorithm Based on Deconvolutional Neural Network", Wang Qiangjun of Xidian University notes that a deconvolutional neural network can be used as a multi-scale tool to decompose an image into feature maps. This method is also a transform-domain image processing method, but its deconvolutional neural network model is initialized with Butterworth high-pass and low-pass filters to perform the decomposition and obtain the feature maps. This initialization cannot form a complete semantic expression of the input multi-focus images and cannot accurately extract their detail information, so it cannot meet the accuracy requirements of subsequent processing.
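For context on the Butterworth initialization criticized above, a sketch of standard frequency-domain Butterworth low-pass and high-pass transfer functions; the order n and cutoff D0 below are illustrative assumptions, not values taken from the cited thesis:

# Standard Butterworth low-pass / high-pass frequency responses,
# the kind of filters the cited thesis uses to initialize its network.
import numpy as np

def butterworth_filters(size, D0=30.0, n=2):
    # Distance of every frequency sample from the centre of the spectrum.
    u = np.arange(size) - size // 2
    U, V = np.meshgrid(u, u)
    D = np.sqrt(U**2 + V**2)
    low_pass = 1.0 / (1.0 + (D / D0) ** (2 * n))  # standard Butterworth low-pass
    high_pass = 1.0 - low_pass                    # complementary high-pass
    return low_pass, high_pass

lp, hp = butterworth_filters(64)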

Method used




Detailed Description of Embodiments

[0036] The embodiments and effects of the present invention will be described in detail below in conjunction with the accompanying drawings.

[0037] Referring to figure 1, the multi-focus image fusion method based on a directional filter and a deconvolutional neural network of the present invention is implemented through the following steps:

[0038] Step 1, design an initial high-pass directional filter and an initial Gaussian low-pass filter.

[0039] The high-pass directional filter accurately extracts the detail information of the image in the corresponding direction, and the Gaussian low-pass filter smooths the image and filters out noise. Their design methods are as follows:

[0040] (1.1) Design an initial Gaussian low-pass filter of size N with cutoff frequency δ1:

[0041] According to the two-dimensional Gaussian function, the calculation formula of the filter with cutoff frequency δ1 is obtained as follows:

[0042...
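The formula images of paragraphs [0040]-[0041] are not reproduced in this excerpt. As a hedged sketch of step 1, the snippet below builds an N x N frequency-domain Gaussian low-pass filter with cutoff δ1 using the standard formulation H = exp(-D^2 / (2 δ1^2)), plus a simple directional high-pass filter; the angular weighting is an illustrative assumption, not the patent's exact design:

# Hedged sketch of step 1: Gaussian low-pass filter and a simple directional high-pass.
import numpy as np

def gaussian_low_pass(N, delta1):
    # Standard frequency-domain Gaussian low-pass: H = exp(-D^2 / (2 * delta1^2)).
    u = np.arange(N) - N // 2
    U, V = np.meshgrid(u, u)
    D2 = U**2 + V**2
    return np.exp(-D2 / (2.0 * delta1**2))

def directional_high_pass(N, delta1, theta):
    # High-pass complement of the Gaussian, emphasising the direction theta (radians).
    # The cos^2 angular weighting is an illustrative assumption, not the patented design.
    u = np.arange(N) - N // 2
    U, V = np.meshgrid(u, u)
    angle = np.arctan2(V, U)
    return (1.0 - gaussian_low_pass(N, delta1)) * np.cos(angle - theta) ** 2

lp = gaussian_low_pass(15, delta1=2.0)
hp_horizontal = directional_high_pass(15, delta1=2.0, theta=0.0)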



Abstract

The invention discloses a multi-focus image fusion method based on a directional filter and a deconvolutional neural network, which mainly solves the problems of low fusion precision and poor image quality indexes in existing multi-focus image fusion methods. It is implemented as follows: select the multi-focus image data set to be fused; design an initial high-pass directional filter and low-pass filter for the data set; preset this filter group into the deconvolutional neural network model; train the network on the error between the reconstructed image and the input image; input the two images to be fused into the trained network to obtain their respective feature maps, and fuse the feature maps of the two images to obtain a fused feature map; convolve the fused feature map with the specific filter bank and sum the results to obtain the final fused image. The invention improves image fusion precision; the filter group can be set to any size and direction, giving high precision and a wide application range, and it can solve the problem of unclear images caused by out-of-focus regions in multi-focus images.
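Read as a pipeline, the abstract amounts to: extract feature maps of the two source images with the trained deconvolutional network, fuse the feature maps, then convolve the fused maps with the filter bank and sum the results. A minimal sketch of the last two stages, assuming the feature maps have already been produced and using an absolute-maximum fusion rule (the patent's exact fusion rule is not quoted in this excerpt):

# Sketch of the fusion and reconstruction stages described in the abstract,
# assuming feature maps have already been extracted by the trained network.
import numpy as np
from scipy.signal import convolve2d

def fuse_feature_maps(maps_a, maps_b):
    # Per pixel, keep the feature response with the larger magnitude (illustrative rule).
    return [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(maps_a, maps_b)]

def reconstruct(fused_maps, filter_bank):
    # Convolve each fused feature map with its filter and sum the results.
    fused_image = np.zeros_like(fused_maps[0])
    for fmap, filt in zip(fused_maps, filter_bank):
        fused_image += convolve2d(fmap, filt, mode='same', boundary='symm')
    return fused_image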

Description

Technical field

[0001] The invention belongs to the technical field of image processing and mainly relates to an image fusion method, which can be used to increase the amount of information in a multi-focus image and to obtain image details.

Background technique

[0002] Digital image fusion is a technology that comprehensively processes multi-source images and image sequences in a multi-measurement space; after resampling, a fused image is obtained through different fusion rules. Compared with an image in a single measurement space, image fusion integrates multiple measurements of the same image, combines the advantageous image information under different measurements so that they complement each other, and obtains an image with more effective information. This greatly improves image quality and the utilization of single-measurement data, improves the credibility of image processing, increases the robustness of the fused data, reduces the unrelia...

Claims


Application Information

IPC(8): G06T5/50; G06T5/20
CPC: G06T5/50; G06T5/20; G06T2207/20081; G06T2207/20084; G06T2207/20221
Inventor: 那彦, 刘赫, 张志恒, 王浩
Owner XIDIAN UNIV