
Multi-level feature fused generative adversarial network image defogging method

A generative adversarial network image technology, applied in the field of image processing, which can solve problems such as dull overall color, failure-prone processing in the sky regions of the image, and easily distorted image edges, achieving the effect of improved robustness.

Pending Publication Date: 2022-03-04
MINJIANG UNIV
Cites: 0 | Cited by: 1

AI Technical Summary

Problems solved by technology

[0003] Classic traditional defogging methods are mainly based on constructing an atmospheric scattering model to recover the defogged image, but the recovered image tends to have a dark overall color and easily distorted edges, or the method is prone to failure in the sky regions of the image and is computationally heavy.
Because parameter estimation in these traditional methods involves obvious human interference factors, the dehazing effect is not ideal. In recent years, dehazing methods based on deep learning have begun to rise.
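
For context, the atmospheric scattering model that such traditional methods typically build on (referenced but not written out in this excerpt) is commonly given in the following standard form:

```latex
% Haze imaging / atmospheric scattering model (standard form, not quoted from the patent):
%   I(x) : observed foggy image      J(x) : haze-free scene radiance
%   A    : global atmospheric light  t(x) : medium transmission
%   beta : scattering coefficient    d(x) : scene depth
\[
  I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)}
\]
```

Traditional methods estimate A and t(x) from hand-crafted priors and then invert this model, which is where the color distortion and sky-region failures mentioned above originate.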

Method used



Examples


Embodiment Construction

[0034] The technical solution of the present invention will be specifically described below in conjunction with the accompanying drawings.

[0035] The present invention is a generative adversarial network image defogging method that fuses multi-level features. The U-Net structure is embedded into the generator of the generative adversarial network, and a generative adversarial network fusing multi-level features is proposed, comprising a generator for producing the defogged image and a discriminator for discriminating between the defogged image and the label image and feeding the result back to the generator. During downsampling, feature extraction is performed separately on foggy images at different resolutions, and the extracted feature maps are learned through the SE-ResNet module; the learned multi-level feature maps are then spliced together to fuse more image features; then, during upsampling, the fused feature map is put into the SE module for learning, so as to better allocate channel weights and enhance useful feature...
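
As an illustration only (the channel counts, reduction ratio, and layer names below are assumptions, not taken from the patent), a squeeze-and-excitation (SE) channel-attention block of the kind referred to here, together with a residual SE-ResNet-style wrapper, could look like this minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels using globally pooled statistics."""
    def __init__(self, channels, reduction=16):       # reduction ratio is an assumed default
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global average pooling
        self.fc = nn.Sequential(                       # excitation: two fully connected layers
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # channel-wise reweighting

class SEResNetBlock(nn.Module):
    """Residual block with an SE module on its branch (SE-ResNet style)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.se = SEBlock(channels)

    def forward(self, x):
        return x + self.se(self.body(x))               # identity shortcut counters degradation from deeper networks
```

The identity shortcut and channel reweighting correspond to the two effects the text claims: fitting inter-channel information and preventing performance degradation as the network deepens.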


PUM

No PUM

Abstract

The invention relates to a generative adversarial network image defogging method that fuses multi-level features. The method is an end-to-end defogging algorithm based on deep learning. During down-sampling, feature extraction is carried out on foggy images at different resolutions, and the extracted feature maps are learned through an SE-ResNet module, which better fits the information between channels and prevents the performance degradation caused by deepening the network. The learned multi-level feature maps are then spliced together to fuse more image features. Next, during up-sampling, the fused down-sampled feature map is put into an SE module for learning, so as to better distribute channel weights and enhance useful features. The learned feature map is spliced with the up-sampled feature map to fuse more image information. Finally, the residual image learned by the network is added to the input foggy image to obtain the final defogging result. Experimental results show that the image defogging method provided by the invention achieves better defogging performance.
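
To make the data flow described in the abstract concrete, the following is a hedged sketch, not the patented network: the number of levels, channel widths, and layer choices are assumptions, and it reuses the SEBlock/SEResNetBlock helpers from the earlier sketch. It shows a U-Net-style generator that extracts features at several resolutions, splices them by concatenation, applies SE channel weighting to the fused features during up-sampling, and adds the learned residual back to the foggy input:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
# Assumes SEBlock and SEResNetBlock from the previous sketch are defined in scope.

class DefogGenerator(nn.Module):
    """Illustrative U-Net-style generator: multi-level features + SE fusion + residual output."""
    def __init__(self, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, base, 3, padding=1), nn.ReLU(inplace=True),
                                  SEResNetBlock(base))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                                  SEResNetBlock(base * 2))
        self.enc3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                                  SEResNetBlock(base * 4))
        # Fuse multi-level features (pooled enc1/enc2 plus enc3) by concatenation, then SE reweighting.
        self.fuse = nn.Conv2d(base * 7, base * 4, 1)
        self.fuse_se = SEBlock(base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = nn.Sequential(nn.Conv2d(base * 4, base * 2, 3, padding=1), nn.ReLU(inplace=True))
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(inplace=True))
        self.out = nn.Conv2d(base, 3, 3, padding=1)    # predicts a residual image

    def forward(self, hazy):
        f1 = self.enc1(hazy)                           # full resolution
        f2 = self.enc2(f1)                             # 1/2 resolution
        f3 = self.enc3(f2)                             # 1/4 resolution
        # Splice (concatenate) features from all levels at the lowest resolution.
        f1d = F.avg_pool2d(f1, 4)
        f2d = F.avg_pool2d(f2, 2)
        fused = self.fuse_se(self.fuse(torch.cat([f1d, f2d, f3], dim=1)))
        # Up-sample, concatenating skip connections as in U-Net.
        d2 = self.dec2(torch.cat([self.up2(fused), f2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), f1], dim=1))
        residual = self.out(d1)
        return hazy + residual                         # add learned residual to the foggy input

# Usage (shape check only): y = DefogGenerator()(torch.randn(1, 3, 256, 256))  # y: (1, 3, 256, 256)
```

In the adversarial setup the abstract implies, this generator's output would be scored against the clear label image by a discriminator, whose feedback drives the generator's training.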

Description

Technical field

[0001] The invention belongs to the technical field of image processing, and in particular relates to a generative adversarial network image defogging method that fuses multi-level features.

Background technique

[0002] In foggy weather, the irradiance the camera receives from a scene attenuates along the line of sight, and the turbid medium in the atmosphere absorbs light from the scene, causing the captured image to lose contrast and color fidelity. This reduces image quality and directly affects tasks such as video surveillance, remote sensing imaging, automatic navigation and target recognition. Therefore, by designing an algorithm model that eliminates the adverse effects of haze in the image and restores the foggy image to a clear one, people can obtain more of the information they need from the image. Existing image defogging algorithms are mainly divided into two categories, namely, traditional image defogging methods ...

Claims


Application Information

Patent Timeline
No application data
IPC(8): G06T5/00, G06T5/50, G06N3/04, G06N3/08
CPC: G06T5/50, G06N3/04, G06N3/08, G06T2207/20081, G06T2207/20084, G06T2207/20221, G06T5/73
Inventors: 李佐勇, 冯婷, 蔡远征, 郑祥盘, 曾坤
Owner: MINJIANG UNIV