
Image super-resolution reconstruction method based on self-attention high-order fusion network

A super-resolution reconstruction and network-fusion technology, applied in the field of intelligent image processing, achieving the effects of increased feature diversity and enhanced expressive ability while avoiding the extra computational load of preprocessing

Active Publication Date: 2019-06-07
GUILIN UNIV OF ELECTRONIC TECH
Cites: 12 | Cited by: 37

AI Technical Summary

Problems solved by technology

This method effectively avoids the extra computation caused by preprocessing, and can restore more texture detail to reconstruct high-quality images.



Examples


Embodiment

[0041] Referring to Figure 1, an image super-resolution reconstruction method based on a self-attention high-order fusion network includes the following steps:

[0042] 1) Establish the reconstruction model: the reconstruction model consists of a serial convolutional neural network and a self-attention module, as shown in Figure 2. The convolutional neural network is equipped with residual units and a deconvolution layer; the self-attention module comprises a parallel attention branch and a trunk branch, whose outputs are combined by high-order feature fusion. The reconstruction model generates a high-resolution image from a low-resolution input.
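The patent text above does not give equations for the attention branch or the fusion operation, so the following NumPy sketch only illustrates one plausible reading: a softmax self-attention branch running in parallel with a trunk branch, combined by a fusion that adds a second-order (element-wise product) term to the first-order sum. The function names and the exact fusion form are illustrative assumptions, not the patented formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_branch(feat):
    """Self-attention over flattened spatial positions.

    feat: (N, C) array of N spatial positions with C channels. The learned
    Q/K/V projections of a real model are replaced by identities here.
    """
    scores = feat @ feat.T / np.sqrt(feat.shape[1])
    return softmax(scores, axis=-1) @ feat

def high_order_fusion(attn_out, trunk_out):
    """Hypothetical high-order fusion: first-order sum plus a
    second-order element-wise product term."""
    return attn_out + trunk_out + attn_out * trunk_out

rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 8))   # stand-in for coarse CNN features
fused = high_order_fusion(attention_branch(feat), feat)
print(fused.shape)  # (16, 8): fusion preserves the feature shape
```

Because the second-order term multiplies the two branch outputs element-wise, the fused feature can encode interactions between attention-weighted and trunk features that a plain sum cannot, which matches the stated goal of increasing diversity and expressive ability.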

[0043] 2) CNN feature extraction: the original low-resolution image is fed directly into the CNN established in step 1); the CNN outputs coarse high-resolution features.
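Feeding the raw LR image directly means upsampling happens inside the network, via the deconvolution layer, rather than as a bicubic preprocessing step on the full-size image (the "extra calculation" the method avoids). The toy NumPy stand-in below mimics a transposed convolution with a fixed bilinear-style kernel; in the actual network the kernel would be learned, and this function is purely illustrative.

```python
import numpy as np

def deconv_upsample(x, scale=2):
    """Transposed-convolution-style 2x upsampling: zero-stuffing followed
    by a fixed bilinear kernel (a learned kernel in a real network)."""
    h, w = x.shape
    up = np.zeros((h * scale, w * scale))
    up[::scale, ::scale] = x             # insert zeros between pixels
    k = np.array([0.5, 1.0, 0.5])
    kernel = np.outer(k, k)              # separable bilinear kernel
    pad = np.pad(up, 1, mode="edge")
    out = np.empty_like(up)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (pad[i:i + 3, j:j + 3] * kernel).sum()
    return out

lr = np.arange(16, dtype=float).reshape(4, 4)
hr = deconv_upsample(lr)
print(hr.shape)   # (8, 8)
print(hr[2, 2])   # 5.0: original pixels survive at even positions
```

Running all convolutions at LR resolution and upsampling only at the end is the standard efficiency argument for post-upsampling architectures such as this one.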

[0044] 3) Feature extraction of the se...



Abstract

The invention discloses an image super-resolution reconstruction method based on a self-attention high-order fusion network, characterized by the following steps: 1) building a reconstruction model; 2) performing CNN feature extraction; 3) performing attention-branch feature extraction in the self-attention module; 4) performing trunk-branch feature extraction in the self-attention module; 5) performing high-order feature fusion; and 6) performing image reconstruction. The method effectively avoids the extra computation caused by preprocessing and recovers more texture detail, reconstructing a high-quality image.

Description

technical field

[0001] The invention relates to the technical field of intelligent image processing, in particular to an image super-resolution reconstruction method based on a self-attention high-order fusion network.

Background technique

[0002] Recently, significant advances in deep learning for computer vision have influenced the field of super-resolution. Single-image super-resolution is an ill-posed inverse problem that aims to recover a high-resolution (HR) image from a low-resolution (LR) image. Typical current approaches construct high-resolution images by learning a nonlinear LR-to-HR mapping. Dong et al. first introduced a three-layer convolutional neural network (CNN) for image super-resolution, proposing a CNN-based super-resolution reconstruction method that learns the nonlinear LR-to-HR mapping in an end-to-end manner. Thanks to the emergence of the residual netwo...
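The "ill-posed inverse problem" mentioned in the background is conventionally written with the standard degradation model; this formulation is general background for the reader, not quoted from the patent:

```latex
% The LR image y is a degraded observation of the HR image x:
% blur by a kernel k, downsample by factor s, and add noise n.
y = (x \otimes k)\!\downarrow_s +\, n
% Super-resolution seeks a mapping F_\theta with F_\theta(y) \approx x.
% Since many distinct x are consistent with the same y, the inverse
% problem has no unique solution, i.e. it is ill-posed.
```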

Claims


Application Information

IPC(8): G06T3/40; G06K9/62
Inventor: 林乐平, 梁婷, 欧阳宁, 莫建文, 袁华, 首照宇, 张彤, 陈利霞
Owner GUILIN UNIV OF ELECTRONIC TECH