
Variable-length input super-resolution video reconstruction method based on deep learning

A technology of super-resolution and deep learning, which is applied in the field of variable-length input super-resolution video reconstruction based on deep learning, and can solve problems such as inaccurate alignment of long input image sequences

Active Publication Date: 2020-08-11
CHANGAN UNIV
View PDF | Cites 4 | Cited by 18

AI Technical Summary

Problems solved by technology

The use of variable-length input sequences solves the problem of inaccurate alignment of long input image sequences in video super-resolution tasks. The gradual alignment fusion network can align and fuse any number of adjacent frames without affecting the subsequent reconstruction task, making the method more practical.




Embodiment Construction

[0073] In order to describe in detail the technical content, operation flow, achieved purpose and effect of the present invention, the following embodiments are given.

[0074] A variable-length input super-resolution video reconstruction method based on deep learning includes the following steps:

[0075] Step 1. Construct training samples of random length and obtain a training set;

[0076] Exemplarily, a training sample of random length is obtained as follows:

[0077] First, fix the input sequence length K (K > 0) and select the data set;

[0078] Second, choose the target frame to be reconstructed;

[0079] Finally, select the x frames to the left of the target frame and the K-1-x frames to its right, and arrange the resulting K frames in left-to-right order to obtain the input image sequence;

[0080] where x is an integer drawn from a uniform distribution over x = 0, 1, ..., K-1.
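The sampling procedure above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation; the function and variable names are my own.

```python
import random

def build_input_sequence(frames, target_idx, K):
    """Build a length-K input sequence around a target frame.

    x frames are taken from the left of the target and K-1-x from its
    right, with x drawn uniformly from {0, 1, ..., K-1}; the K frames
    (including the target) are returned in left-to-right order.
    """
    assert K > 0
    x = random.randint(0, K - 1)                          # uniform over 0..K-1
    left = frames[target_idx - x : target_idx]            # x frames on the left
    right = frames[target_idx + 1 : target_idx + K - x]   # K-1-x frames on the right
    return left + [frames[target_idx]] + right
```

Because x is resampled for every training sample, the target frame occupies a random position within the input window, which exposes the network to sequences of varying left/right context during training.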

[0081] The length of the input sequence in the present...



Abstract

The invention discloses a variable-length input super-resolution video reconstruction method based on deep learning. The method comprises the following steps: constructing training samples of random length to obtain a training set; establishing a super-resolution video reconstruction network model comprising a feature extractor, a gradual alignment fusion module, a deep residual module, and a superposition module connected in sequence; training the model on the training set to obtain a trained super-resolution video reconstruction network; and feeding the videos to be processed into the trained network in sequence to obtain the corresponding super-resolution reconstructed videos. Because the method adopts a gradual alignment fusion mechanism, alignment and fusion are carried out frame by frame and the alignment operation acts only on two adjacent frames; the model can therefore handle longer temporal relationships and use more adjacent video frames, so the input contains more scene information and the reconstruction quality is effectively improved.
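The gradual alignment fusion mechanism described in the abstract can be sketched as a pairwise loop: rather than aligning every frame to the target at once, each step aligns and fuses only two adjacent feature maps, so the sequence length is unbounded. The toy stand-ins for the learned alignment and fusion modules below are assumptions for illustration only; in the patent these are trained network components.

```python
def progressive_align_fuse(features, align, fuse):
    """Toy sketch of gradual (progressive) alignment-fusion.

    Walks the feature sequence left to right, aligning each next
    feature map to the running fused result and fusing them, so the
    alignment operation only ever sees two adjacent frames.
    """
    fused = features[0]
    for nxt in features[1:]:
        aligned = align(fused, nxt)   # alignment acts on two adjacent frames only
        fused = fuse(fused, aligned)  # fuse into the running representation
    return fused

# Hypothetical stand-ins for the learned modules:
align = lambda ref, other: other                          # identity "alignment"
fuse = lambda a, b: [(u + v) / 2 for u, v in zip(a, b)]   # average "fusion"
```

The key design point this sketch illustrates is that the cost and difficulty of alignment do not grow with sequence length, since no frame is ever warped across a large temporal offset in a single step.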

Description

Technical field

[0001] The invention belongs to the technical field of video restoration, and in particular relates to a method for reconstructing super-resolution video with variable-length input based on deep learning.

Background technique

[0002] Most applications based on images and videos depend on the quality of the image. In general, the quality of an image is related to the amount of information it contains. Image resolution is used to measure the amount of information contained in an image; it is expressed by the number of pixels per unit area, such as 1024×768. The resolution of an image thus represents its quality, so in real-life application scenarios, high resolution becomes the quality requirement for images and videos.

[0003] However, when a video contains complex motion such as occlusion, severe blur, and large offsets, it is necessary to reconstruct the video to obtain high-quality video information. In order to effecti...

Claims


Application Information

IPC(8): G06T3/40, G06T3/60, G06T5/50, G06N3/08, G06N3/04
CPC: G06T3/4053, G06T5/50, G06T3/60, G06N3/08, G06T2207/20221, G06N3/045, Y02T10/40
Inventor: 任卫军, 丁国栋, 黄金文, 张力波
Owner CHANGAN UNIV