
Video super-resolution method based on multi-frame attention mechanism progressive fusion

A super-resolution and attention technology, applied to instruments, biological neural network models, and graphics and image conversion. It addresses problems such as the difficulty of estimating accurate optical flow, the sensitivity of deformable convolution to its input, and the resulting impact on video reconstruction quality, and achieves the effects of reducing fusion difficulty, accelerating convergence, and improving super-resolution efficiency.

Active Publication Date: 2021-06-18
SOUTH CHINA UNIV OF TECH

AI Technical Summary

Problems solved by technology

However, this type of method must handle two relatively independent problems, estimating optical flow and reconstructing the high-resolution image, and the accuracy of the optical flow estimation seriously affects the quality of the video reconstruction. Optical flow estimation is itself a challenging task; especially in scenes with large motion, accurate flow information is difficult to estimate.
[0008] The third type of method uses Deformable Convolution networks for the video super-resolution task. For example, DUF and TDAN perform implicit motion compensation to avoid explicit optical flow estimation, and their results surpass those of flow-based methods. However, the deformable convolution used in these methods is sensitive to its input, and unreasonable offsets easily produce obvious reconstruction artifacts.
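The sensitivity described here can be made concrete with a small sketch of deformable-convolution alignment in the TDAN style, written in PyTorch. Everything below (module name, channel count, how offsets are predicted) is an illustrative assumption, not the patent's or TDAN's exact implementation.

```python
# Minimal sketch of deformable-convolution-based frame alignment.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformAlign(nn.Module):
    def __init__(self, channels=64, kernel_size=3):
        super().__init__()
        # Offsets are predicted from concatenated target/neighbor features;
        # 2 offsets (x, y) per kernel sampling point.
        self.offset_conv = nn.Conv2d(channels * 2, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=kernel_size // 2)
        self.deform_conv = DeformConv2d(channels, channels, kernel_size,
                                        padding=kernel_size // 2)

    def forward(self, neighbor_feat, target_feat):
        # The predicted offsets implicitly encode motion between the frames.
        offsets = self.offset_conv(torch.cat([neighbor_feat, target_feat], dim=1))
        # Unreasonable offsets at this step are what cause the reconstruction
        # artifacts noted above.
        return self.deform_conv(neighbor_feat, offsets)
```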
[0009] It can be seen that existing video super-resolution methods still have deficiencies, and how to effectively improve the effect and efficiency of video super-resolution is a technical problem that currently needs to be solved.



Examples


Embodiment

[0078] This embodiment provides a video super-resolution method based on multi-frame attention mechanism progressive fusion. As shown in Figure 1, the method includes the following steps:

[0079] S1. Video decoding: use the ffmpeg tool to extract frames from the video dataset and save them as images to generate a training set.
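A minimal sketch of step S1, assuming ffmpeg is installed and on PATH; the directory layout and frame naming pattern are illustrative, not specified by the patent.

```python
# Sketch of step S1: decode a video into frames with ffmpeg via subprocess.
import subprocess
from pathlib import Path

def extract_frames(video_path: str, out_dir: str) -> None:
    """Save every frame of video_path as a numbered PNG in out_dir."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", video_path, str(Path(out_dir) / "frame_%05d.png")],
        check=True,
    )

# Example: build LR/HR frame folders for one training video pair.
# extract_frames("videos/clip_001_lr.mp4", "train/lr/clip_001")
# extract_frames("videos/clip_001_hr.mp4", "train/hr/clip_001")
```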

[0080] Here, the video dataset contains high-resolution videos and low-resolution videos with the same video content. High-resolution videos refer to videos that reach the target resolution, and low-resolution videos refer to videos that are lower than the target resolution.

[0081] All frames of the high- and low-resolution videos are kept, and each low-resolution video image has a corresponding high-resolution video image, forming the initial training set. The initial training set has N pairs of images: {(x_1L, x_1H), (x_2L, x_2H), …, (x_NL, x_NH)}, where x_NL represents the low-resolution video image in the N-th pair and x_NH represents the corresponding high-resolution video image ...
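A small sketch of how the paired training set {(x_1L, x_1H), …, (x_NL, x_NH)} might be represented as a PyTorch dataset; the directory layout and matching-by-filename convention are assumptions for illustration, not part of the patent.

```python
# Sketch of the paired LR/HR training set described in [0081].
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF

class LRHRPairDataset(Dataset):
    def __init__(self, lr_dir: str, hr_dir: str):
        # Frames are matched by sorted filename, so every LR image x_iL
        # has a corresponding HR image x_iH.
        self.lr_paths = sorted(Path(lr_dir).glob("*.png"))
        self.hr_paths = sorted(Path(hr_dir).glob("*.png"))
        assert len(self.lr_paths) == len(self.hr_paths)

    def __len__(self):
        return len(self.lr_paths)

    def __getitem__(self, i):
        lr = TF.to_tensor(Image.open(self.lr_paths[i]).convert("RGB"))
        hr = TF.to_tensor(Image.open(self.hr_paths[i]).convert("RGB"))
        return lr, hr  # the pair (x_iL, x_iH)
```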



Abstract

The invention discloses a video super-resolution method based on progressive fusion with a multi-frame attention mechanism. The method comprises the following steps: first, frames are extracted from a video dataset to generate a training set; then a multi-frame attention mechanism progressive fusion module, a feature extraction module and a reconstruction module are connected to construct a video super-resolution network, and the network is trained on the training set with a low-redundancy training strategy, that is, only the target frame is learned, while the preceding and following frames are used only as auxiliary information rather than as training targets, which greatly improves learning efficiency; finally, the trained video super-resolution model is used to reconstruct the video to be enlarged, producing a high-resolution video. The method can make full use of the information of the preceding and following frames to help reconstruct the target frame, and effectively improves the video super-resolution effect.
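To make the pipeline in the abstract concrete, here is a minimal PyTorch sketch of a network that connects an attention-based multi-frame fusion module, a feature extraction module and a reconstruction module. The module internals, names and hyperparameters are assumptions for illustration; the patent's actual architecture may differ.

```python
# Minimal sketch of the fusion -> feature extraction -> reconstruction pipeline.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Progressively fuse neighbor-frame features into the target frame with soft attention."""
    def __init__(self, channels=64):
        super().__init__()
        self.att = nn.Conv2d(channels * 2, channels, 3, padding=1)

    def forward(self, target, neighbors):
        fused = target
        for nb in neighbors:  # progressive, frame-by-frame fusion
            weight = torch.sigmoid(self.att(torch.cat([fused, nb], dim=1)))
            fused = fused + weight * nb
        return fused

class VSRNet(nn.Module):
    def __init__(self, channels=64, scale=4, num_blocks=8):
        super().__init__()
        self.embed = nn.Conv2d(3, channels, 3, padding=1)
        self.fusion = AttentionFusion(channels)
        self.features = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                          nn.ReLU(inplace=True))
            for _ in range(num_blocks)])
        self.reconstruct = nn.Sequential(
            nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale))  # sub-pixel upsampling to the target resolution

    def forward(self, frames):
        # frames: (B, T, 3, H, W); the center frame is the target to reconstruct.
        feats = [self.embed(frames[:, t]) for t in range(frames.size(1))]
        center = frames.size(1) // 2
        fused = self.fusion(feats[center], feats[:center] + feats[center + 1:])
        return self.reconstruct(self.features(fused))
```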

Description

Technical Field

[0001] The present invention relates to the fields of deep-learning-based image super-resolution (SISR) and video super-resolution (VSR), and in particular to a video super-resolution method based on progressive fusion with a multi-frame attention mechanism.

Background Technique

[0002] Deep-learning-based image super-resolution (SISR) mainly uses a convolutional neural network (CNN) as the learning model. It learns high-frequency information, such as the missing texture details of low-resolution images, from a large amount of data, and realizes end-to-end conversion from low-resolution images to high-resolution images. Compared with traditional interpolation methods, the deep learning approach shows great advantages and has achieved significant improvements on PSNR, SSIM and other evaluation metrics. In recent years, a large number of excellent deep-learning-based image super-resolution methods have emerged. ...
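For reference, PSNR, one of the evaluation metrics mentioned above, can be computed as follows; this is the standard definition, not code from the patent (SSIM is omitted for brevity).

```python
# Peak signal-to-noise ratio between a ground-truth and a reconstructed image.
import numpy as np

def psnr(hr, sr, max_val=255.0):
    mse = np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```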


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T3/40; G06N3/04
CPC: G06T3/4053; G06N3/045
Inventors: 刘文顺, 王恺
Owner: SOUTH CHINA UNIV OF TECH