
Multi-focus image fusing method based on dual-channel PCNN (Pulse Coupled Neural Network)

A multi-focus image fusion method, applied to image enhancement, image data processing, instruments, and similar fields. It addresses the problems of large approximation errors, coarse fusion rules, and too many parameters, and achieves good representation of image singularities, stronger adaptiveness, and a simple network structure.

Status: Inactive; Publication Date: 2012-10-10
INNER MONGOLIA UNIV OF SCI & TECH
Cites: 4; Cited by: 40

AI Technical Summary

Problems solved by technology

However, the disadvantage of this technology is that the Contourlet transform is carried out in the discrete domain and its sampling process is not translation-invariant, which produces pseudo-Gibbs artifacts and degrades the fusion result. Moreover, its application object is multi-spectral images, so it is not suitable for multi-focus image fusion.
[0005] A search of the existing technologies found that Li Meili of Northwestern Polytechnical University and others proposed "Infrared and visible light image fusion method based on NSCT and PCNN" (Optoelectronic Engineering, 2010, No. 6: 90-95). The registered source images are decomposed with the non-subsampled Contourlet transform to obtain low-frequency sub-band coefficients and the band-pass sub-band coefficients of each direction; an improved PCNN-based fusion rule is applied to the band-pass sub-band coefficients to determine the band-pass sub-band coefficients of the fused image; finally, the fused image is obtained through the inverse non-subsampled Contourlet transform. This method outperforms the Laplacian pyramid, wavelet, and non-subsampled Contourlet transform methods, which demonstrates that image fusion with the non-subsampled Contourlet transform and a PCNN is feasible. However, the disadvantages of this technology are that the PCNN model is complex, has many parameters, and is computationally expensive, and that its fusion objects are images of the same content taken in different spectral bands, so it cannot be directly applied to the fusion of multi-focus images.
However, the disadvantages of this technology are that the PCNN model is complex, has many parameters, and is computationally expensive. Moreover, its fusion rule is based on comparing coefficients: the fused coefficient at each pixel reflects the information of only one of the source images, and the influence of the other image is not considered, so the method is not suitable for fusing images that are notably brighter or darker.
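To make this criticism concrete, the coefficient-comparison rule described above can be written as a minimal NumPy sketch (an illustrative generic "choose-max" rule, not code from the patent): each fused coefficient is copied from whichever source sub-band has the larger magnitude, so every output position reflects only one of the two inputs.

```python
import numpy as np

def choose_max_fusion(coeff_a: np.ndarray, coeff_b: np.ndarray) -> np.ndarray:
    """Generic 'choose-max' fusion rule (illustrative; not the patent's method).
    Each fused coefficient is taken from whichever source sub-band has the
    larger absolute value, so it carries information from only one source
    image at that position."""
    take_a = np.abs(coeff_a) >= np.abs(coeff_b)
    return np.where(take_a, coeff_a, coeff_b)
```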
However, the disadvantage of this technology is that the wavelet transform is only effective for one-dimensional piecewise-smooth signals; for two-dimensional natural images, which contain abundant texture features and prominent line singularities, the wavelet transform is not the optimal representation.
Because the sparsity of the wavelet expansion coefficients is not ideal, large approximation errors are produced, and the sampling process is not translation-invariant, which introduces pseudo-Gibbs artifacts and degrades the fusion result.

Method used



Examples


Embodiment 1

[0027] As shown in Figure 1, this embodiment includes the following steps:

[0028] Step 1: Perform the non-subsampled Contourlet transform on the registered left-focused original image I_A and the right-focused original image I_B, which depict the same content, to obtain the directional sub-band coefficient images in the stationary Contourlet transform domain;

[0029] In the non-subsampled Contourlet transform described above, the scale decomposition filter is realized with the CDF 9/7 pyramid wavelet filter and the direction decomposition filter is the pkva directional filter. A two-level scale decomposition is applied to each original image to obtain its low-pass component image and band-pass component images, i.e. the low-frequency sub-images I_A-lf and I_B-lf and the corresponding high-frequency sub-images, where the first level has 4 directional sub-bands and the second level has 8 directional sub-bands, and k is the number of laye...
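A full NSCT implementation (CDF 9/7 pyramid filters plus the pkva directional filter bank) is not available as a standard Python library call, so the sketch below is only a simplified, hedged stand-in for this decomposition step: it uses PyWavelets' stationary (undecimated) wavelet transform with the 'bior4.4' filters, which are commonly identified with the CDF 9/7 wavelet, to obtain shift-invariant low-pass and band-pass sub-images of full size; the directional (pkva) stage and the 4/8 directional sub-bands per level are omitted.

```python
import numpy as np
import pywt  # PyWavelets

def undecimated_decompose(img: np.ndarray, levels: int = 2):
    """Simplified, shift-invariant stand-in for the scale-decomposition part of
    step 1 (illustrative assumption, not the patent's NSCT implementation).
    Uses the stationary wavelet transform with 'bior4.4' filters (commonly
    identified with CDF 9/7).  Image sides must be divisible by 2**levels."""
    coeffs = pywt.swt2(img.astype(np.float64), wavelet="bior4.4", level=levels)
    # coeffs is ordered coarsest level first: [(cA_n, (cH_n, cV_n, cD_n)), ...];
    # every sub-band keeps the full image size because nothing is downsampled.
    lowpass = coeffs[0][0]                         # low-frequency sub-image (e.g. I_A-lf)
    bandpass = [details for _, details in coeffs]  # band-pass sub-images per level
    return lowpass, bandpass

# Usage sketch: decompose both registered source images in the same way.
# low_a, high_a = undecimated_decompose(img_a)
# low_b, high_b = undecimated_decompose(img_b)
```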

Embodiment 2

[0052] The method of embodiment 2 is the same as that of embodiment 1, but the experimental images are different.

[0053] In summary, the comparison of effects in Figure 3 and Figure 4 shows that this method integrates the respective information of the multi-focus images well: it not only effectively enriches the background information of the image, but also preserves image details to the greatest extent, which accords with the visual characteristics of the human eye. Therefore, in keeping the fused image faithful to the real information of the source images, the method of the present invention clearly outperforms fusion based on the Laplacian pyramid transform, the dual-channel PCNN, and the PCNN.
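The excerpt does not reproduce the contents of Table 1, so the specific indicators are not repeated here. As a hedged illustration only, two objective metrics commonly used to evaluate fused images, information entropy and average gradient, can be computed as follows (they may or may not coincide with the indicators reported in the patent):

```python
import numpy as np

def information_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of the grey-level histogram (bits), assuming an 8-bit
    image; larger values suggest the fused image carries more information."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist[hist > 0].astype(np.float64)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def average_gradient(img: np.ndarray) -> float:
    """Mean local gradient magnitude; larger values suggest sharper detail."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```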

[0054] As shown in Figure 3 (c), (d), (e), (f), Table 1 lists the objective evaluation indicators of the fusion results of the four methods.

[0055] Table 1 Comparison table of experimental results

[0056]

[0057] As shown in Figure 4 (...



Abstract

The invention discloses a multi-focus image fusion method based on a dual-channel PCNN (Pulse Coupled Neural Network), which belongs to the technical field of image processing. The method comprises the following steps: performing NSCT (Non-Subsampled Contourlet Transform) on two images respectively to obtain a plurality of sub-images of different frequencies; fusing the corresponding sub-images with the dual-channel PCNN to determine each band-pass sub-band coefficient of the fused image; and performing the inverse NSCT to obtain the fused image. Due to the adoption of this method, the defects of conventional multi-focus image fusion methods are overcome and the fusion effect is improved.
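The excerpt does not give the dual-channel PCNN equations or parameter values, so the sketch below is only a minimal illustration of one common dual-channel PCNN formulation from the literature, not the patent's exact model: a pair of corresponding sub-band coefficient matrices drives the two feeding channels, neurons fire when the linking-modulated internal activity exceeds a decaying dynamic threshold, and the fused coefficient at each position is taken from the channel that dominated when the neuron first fired. The linking kernel, decay constants, and fixed iteration count are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def dual_channel_pcnn_fuse(c_a: np.ndarray, c_b: np.ndarray,
                           iterations: int = 200,
                           beta_a: float = 0.5, beta_b: float = 0.5,
                           alpha_theta: float = 0.2, v_theta: float = 20.0) -> np.ndarray:
    """Fuse two corresponding sub-band coefficient matrices with a minimal
    dual-channel PCNN.  Illustrative sketch of one common formulation; the
    patent's exact model and parameters are not given in this excerpt."""
    # Stimuli: coefficient magnitudes, normalised to [0, 1] for stable firing.
    s_a, s_b = np.abs(c_a), np.abs(c_b)
    scale = max(s_a.max(), s_b.max())
    if scale > 0:
        s_a, s_b = s_a / scale, s_b / scale

    link_kernel = np.array([[0.5, 1.0, 0.5],    # assumed 3x3 linking weights
                            [1.0, 0.0, 1.0],
                            [0.5, 1.0, 0.5]])
    y = np.zeros_like(s_a)                      # firing output of the previous step
    theta = np.ones_like(s_a)                   # dynamic threshold
    fired = np.zeros(s_a.shape, dtype=bool)     # has each neuron fired yet?
    choose_a = np.zeros(s_a.shape, dtype=bool)  # channel selected at first firing

    for _ in range(iterations):
        link = convolve(y, link_kernel, mode="constant")    # linking from neighbours
        u_a = s_a * (1.0 + beta_a * link)                   # channel-A internal activity
        u_b = s_b * (1.0 + beta_b * link)                   # channel-B internal activity
        u = np.maximum(u_a, u_b)
        y = (u > theta).astype(float)                       # neurons fire above threshold
        newly_fired = (y > 0) & ~fired
        choose_a[newly_fired] = u_a[newly_fired] >= u_b[newly_fired]
        fired |= newly_fired
        theta = np.exp(-alpha_theta) * theta + v_theta * y  # decay + refractory boost

    # Neurons that never fired fall back to the larger-magnitude coefficient.
    choose_a[~fired] = s_a[~fired] >= s_b[~fired]
    return np.where(choose_a, c_a, c_b)
```

Applying such a routine to every corresponding pair of band-pass sub-band matrices (with, for example, a simple rule for the low-frequency sub-images) and then performing the inverse transform reproduces the three-step pipeline described in the abstract.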

Description

Technical field [0001] The invention relates to a method in the technical field of image processing, in particular to a multi-focus image fusion method based on the non-subsampled Contourlet transform (NSCT) and a dual-channel PCNN (Pulse Coupled Neural Network). Background technique [0002] Because different types of optical equipment have a limited depth of field, their images of different objects in the same target area are formed with different focal lengths, so the clear regions of the resulting images differ; no single image can render all objects with the same degree of clarity, and the information expressed in any one image is incomplete. These images, however, show different emphases of the same scene and therefore contain complementary information. By fusing the focused regions of the different images, the generated image carries more complete information. [0003] Multi-focus images are obta...

Claims


Application Information

IPC(8): G06T5/50; G06N3/02
Inventor: 张宝华, 吕晓琪, 王月明
Owner: INNER MONGOLIA UNIV OF SCI & TECH