
Multi-exposure image fusion method based on end-to-end deep learning framework

A deep learning and image fusion technology, applied in the field of image processing, which addresses the problems of complicated calculation and the large number of steps in existing methods.

Active Publication Date: 2017-09-26
BEIJING UNION UNIVERSITY
Cites: 1 | Cited by: 19

AI Technical Summary

Problems solved by technology

The method referenced in this application involves many steps and complicated calculation, and can only fuse spectral images; it does not work with ordinary images.

Method used


Image

  • Multi-exposure image fusion method based on end-to-end deep learning framework

Examples


Embodiment Construction

[0049] The present invention will be further elaborated below in conjunction with the accompanying drawings and specific embodiments.

[0050] This application proposes using a convolutional neural network to realize end-to-end multi-exposure fusion. The input to the convolutional neural network is a sequence of images with different exposures, and a high-quality fused result image is obtained directly from the network. Through the training process, the network learns the mapping relationship between the differently exposed images and the real-scene (standard-illumination) image.
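The end-to-end mapping described above can be sketched as a small feed-forward network that stacks the differently exposed inputs as channels and outputs the fused image directly. The three-layer form F(Y) = W3·g(W2·g(W1·Y + B1) + B2) + B3 matches the parameters W1..W3, B1..B3 named in the embodiment below, but the layer widths, 1×1 (per-pixel) filters, and random weights here are illustrative assumptions, not the patent's actual architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def fuse(exposures, params):
    """Map a list of HxW differently exposed images to one fused HxW image.

    Uses 1x1 (per-pixel) filters so each layer is a plain matrix product;
    a real implementation would use spatial convolutions instead.
    """
    y = np.stack(exposures, axis=-1)          # H x W x C: exposures as channels
    (W1, B1), (W2, B2), (W3, B3) = params
    h1 = relu(y @ W1 + B1)                    # first layer + nonlinearity
    h2 = relu(h1 @ W2 + B2)                   # second layer
    out = h2 @ W3 + B3                        # linear output layer, H x W x 1
    return out[..., 0]

rng = np.random.default_rng(0)
H, W, C = 8, 8, 3                             # three exposures (illustrative)
params = [(rng.standard_normal((C, 16)) * 0.1, np.zeros(16)),
          (rng.standard_normal((16, 16)) * 0.1, np.zeros(16)),
          (rng.standard_normal((16, 1)) * 0.1, np.zeros(1))]
exposures = [rng.random((H, W)) for _ in range(C)]
fused = fuse(exposures, params)
print(fused.shape)                            # same spatial size as the inputs
```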

[0051] As shown in Figure 1, step 100 is executed: to learn an end-to-end mapping function F, the parameter Θ must be obtained through training, yielding the parameter values W1, W2, W3, B1, B2 and B3. In this embodiment, Θ is obtained by optimizing a loss function, which is defined by the minimum square error between t...
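Obtaining Θ by minimizing a squared-error loss can be sketched with gradient descent on L(Θ) = (1/n) Σᵢ ||F(Yᵢ; Θ) − Xᵢ||², where Xᵢ is the ground-truth (standard-illumination) image. A single linear layer stands in for the full network here so the gradient has a closed form; the data, learning rate, and "true" fusion weights are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n, c = 200, 3                                  # n pixels, c exposures each
Y = rng.random((n, c))                         # multi-exposure inputs
true_w = np.array([0.2, 0.5, 0.3])             # hypothetical ideal weights
X = Y @ true_w                                 # noise-free ground truth

theta = np.zeros(c)                            # the parameter Θ to learn
lr = 0.5
for _ in range(500):
    pred = Y @ theta                           # F(Y; Θ) for a linear model
    grad = 2.0 / n * Y.T @ (pred - X)          # gradient of the squared error
    theta -= lr * grad                         # gradient-descent update

loss = np.mean((Y @ theta - X) ** 2)
print(theta, loss)                             # theta recovers true_w
```

With noise-free targets the iterates converge to the exact least-squares solution, so the learned Θ matches the weights that generated the data.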


PUM

No PUM

Abstract

The invention provides a multi-exposure image fusion method based on an end-to-end deep learning framework. The method comprises the following steps: a parameter Θ is obtained through training; the original images are fused on the basis of a convolutional neural network to obtain an output image; the original images are downsampled by a factor of N to obtain N² original sub-images; the N² original sub-images are each fused on the basis of the convolutional neural network to obtain N² output sub-images; the N² output sub-images are combined to obtain a combined sub-image; and the output image and the combined sub-image are weighted-averaged to obtain the final fusion result. On the basis of the deep learning framework, an end-to-end multi-exposure image fusion method is realized; unlike traditional methods, in which the network is used only to calculate fusion coefficients, the network produces the fused image directly, and the complexity of the algorithm is greatly reduced.
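The abstract's pipeline can be sketched end to end. A simple mean across exposures stands in for the trained CNN (a placeholder assumption, not the patent's network), and downsampling is interpreted here as a polyphase split into N² sub-images of every N-th pixel, which is one common reading, not confirmed by the source:

```python
import numpy as np

def fuse(stack):
    """Stand-in for the trained CNN: mean across the exposure axis."""
    return stack.mean(axis=0)

def downsample(img, N):
    """Split img into N*N polyphase sub-images (every N-th pixel)."""
    return [img[i::N, j::N] for i in range(N) for j in range(N)]

def combine(subs, N):
    """Inverse of downsample: interleave the N*N sub-images back."""
    h, w = subs[0].shape
    out = np.empty((h * N, w * N))
    for k, s in enumerate(subs):
        out[k // N::N, k % N::N] = s
    return out

N = 2
stack = np.random.default_rng(2).random((3, 8, 8))   # 3 exposures, 8x8 each
full = fuse(stack)                                   # fuse at full resolution
sub_stacks = [np.stack([downsample(im, N)[k] for im in stack])
              for k in range(N * N)]                 # N^2 original sub-images
fused_subs = [fuse(s) for s in sub_stacks]           # N^2 output sub-images
combined = combine(fused_subs, N)                    # combined sub-image
result = 0.5 * full + 0.5 * combined                 # weighted average
print(result.shape)                                  # full input resolution
```

With the mean as the fusion operator the two branches agree exactly; a trained CNN would make them differ, which is what the weighted average is there to reconcile.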

Description

technical field

[0001] The invention relates to the technical field of image processing, in particular to a multi-exposure image fusion method under an end-to-end deep learning framework.

Background technique

[0002] There is a strong demand for high-quality images in daily life and work, for example in photo editing, high-quality imaging on portable devices such as mobile phones, and medical image enhancement. Theoretically, a high-quality image with rich detail can be obtained by fusing a multi-exposure image sequence, and multi-exposure image fusion has become a research hotspot in the field of computer vision. The ultimate goal of a multi-exposure fusion algorithm is that, when the resulting image is displayed, the perception obtained by a human observer is the same as that obtained in the real environment; that is, the observed image and the real scene should not only convey the same information, but the visual perception they give should a...

Claims


Application Information

Patent Timeline
IPC(8): G06T5/50; G06N3/08
CPC: G06N3/084; G06T5/50; G06T2207/20221
Inventor: 王金华, 何宁, 徐光美, 张敬尊, 张睿哲, 王郁昕
Owner BEIJING UNION UNIVERSITY