Video stabilization method based on iterative strategy of recurrent neural network

A video stabilization technology based on a recurrent neural network, applied in the field of remote sensing image processing, which solves the problem that existing methods cannot make good use of temporal information.

Active Publication Date: 2020-11-06
NANJING UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

However, these deep video stabilization methods only stack adjacent video frames along the input channel dimension and then design a temporal regularization term so that the convolutional network can learn the coherence of motion between frames; this approach cannot make good use of the temporal information of adjacent frames.
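For concreteness, the channel-stacking baseline criticized above can be contrasted with a sequential, recurrent-friendly input. The sketch below is illustrative only; the frame size and the window of five neighboring frames are assumptions, not values from the patent.

```python
import numpy as np

# A grayscale clip: 5 adjacent frames, each 32 x 32 pixels
frames = np.zeros((5, 32, 32))

# Baseline criticized above: collapse time into the channel axis, so a
# plain CNN sees a single (5, 32, 32) tensor with no explicit frame order
stacked = np.stack(frames, axis=0)  # shape (5, 32, 32)

# Recurrent alternative: keep the time axis and feed frames one at a time,
# so a hidden state can carry motion information across steps
sequence = [frames[t] for t in range(frames.shape[0])]

print(stacked.shape, len(sequence))  # (5, 32, 32) 5
```

In the stacked form, temporal order survives only implicitly as channel index; in the sequential form, each step can update a hidden state before seeing the next frame, which is the property the recurrent approach exploits.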




Embodiment Construction

[0047] The present invention combines remote sensing image processing technology with deep learning to provide a video stabilization method based on an iterative strategy of a recurrent neural network, which stabilizes shaking image sequences and improves picture quality. The recurrent neural network transmits the motion state between video frames over long sequences and provides a reference for warping the current frame, making the stabilized picture more coherent and clear. The idea of the method is simple and clear: it avoids the unrealistic jitter artifacts caused by losing the temporal relationship between frames, and updates the learned hidden state through the iterative strategy of the recurrent neural network, thereby effectively improving the stabilization effect.
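The core idea of paragraph [0047], carrying a hidden motion state across frames so that each frame's correction is informed by the whole history, can be sketched in miniature. The example below is not the patent's network; it is a minimal one-dimensional recurrent smoother (essentially an exponential moving average treated as a hidden state), with the blending factor `alpha` chosen arbitrarily for illustration.

```python
import numpy as np

def stabilize_trajectory(jittery, alpha=0.9):
    """Carry a hidden motion state across frames so each correction is
    informed by the whole history, not just the previous frame.
    `alpha` controls how strongly history is trusted (illustrative value)."""
    hidden = jittery[0]          # initialize hidden state from the first frame
    smooth = [hidden]
    for pos in jittery[1:]:
        # Recurrent update: blend the hidden state with the new observation
        hidden = alpha * hidden + (1.0 - alpha) * pos
        smooth.append(hidden)
    return np.array(smooth)

# A shaky 1-D camera path: smooth drift plus per-frame jitter
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
path = 10.0 * t + rng.normal(scale=0.8, size=100)
smoothed = stabilize_trajectory(path)

# The smoothed path varies far less frame-to-frame than the input
print(np.abs(np.diff(path)).mean() > np.abs(np.diff(smoothed)).mean())  # True
```

A real stabilizer operates on 2-D image warps rather than a scalar path, and the patent's network learns its update rule rather than using a fixed blend, but the role of the hidden state, propagating motion information forward in time, is the same.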

[0048] With reference to Figure 1, the main process steps of the method of the invention are detailed:

[0049] Step 1: Use a jitter video acquisition and stabilization processing hardware device to obtain paired vi...



Abstract

The invention discloses a video stabilization method based on an iterative strategy of a recurrent neural network. The method comprises the following steps: capturing paired video data with a jitter-video acquisition and stabilization processing hardware device; preprocessing the acquired video stream samples; designing and constructing an end-to-end fully convolutional deep neural network based on an intra-frame and inter-frame iterative strategy of the recurrent neural network; inputting the preprocessed training data into the recurrent neural network and guiding the training of the network parameters with a linear weighting of four losses to obtain a trained model; and inputting a low-quality jittery test video into the trained neural network to obtain a stable version of the target video. In the temporal dimension, the iterative strategy of the recurrent neural network transmits historical motion-state information to each current video frame and enhances the network's perception of the jittery sequence frames, so that a stable picture can be predicted more accurately.
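The abstract says training is guided by a linear weighting of four losses but this excerpt does not name them. The sketch below therefore uses four hypothetical loss terms (pixel, gradient, temporal-coherence, motion) purely to illustrate the weighting scheme; the term definitions and weights are assumptions, not the patent's.

```python
import numpy as np

def combined_loss(pred, target, prev_pred, prev_target,
                  weights=(1.0, 0.1, 0.5, 0.2)):
    """Linearly weight four loss terms into one training objective.
    The four terms here are illustrative stand-ins; this patent excerpt
    does not disclose its actual losses or weights."""
    w1, w2, w3, w4 = weights
    # 1) Pixel loss: per-pixel L1 between stabilized and ground-truth frame
    l_pix = np.abs(pred - target).mean()
    # 2) Gradient loss: preserve edges (a crude perceptual proxy)
    l_grad = np.abs(np.gradient(pred, axis=0) - np.gradient(target, axis=0)).mean()
    # 3) Temporal loss: consecutive predictions should change like the targets do
    l_temp = np.abs((pred - prev_pred) - (target - prev_target)).mean()
    # 4) Motion loss: penalize large frame-to-frame displacement
    l_mot = np.abs(pred - prev_pred).mean()
    return w1 * l_pix + w2 * l_grad + w3 * l_temp + w4 * l_mot

rng = np.random.default_rng(1)
a, b = rng.random((8, 8)), rng.random((8, 8))
loss = combined_loss(a, b, a * 0.9, b * 0.9)
print(loss >= 0.0)  # True: a weighted sum of non-negative terms
```

In practice such a scalar objective would be minimized by backpropagation through the recurrent network; the point here is only the structure of the linear combination.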

Description

Technical Field

[0001] The invention belongs to the technical field of remote sensing image processing, and in particular relates to a video stabilization method based on an iterative strategy of a recurrent neural network.

Background Art

[0002] Remote sensing hyperspectral image super-resolution is a widely used and popular research field. Video is a time-series combination of images. Many video processing algorithms are not robust to low-quality video (blur, noise, picture jitter, insufficient light), so video quality is key to testing the performance of video processing algorithms. Video stabilization can serve as a preprocessing step for these algorithms, further improving their performance by improving video picture quality. The stabilized video can be better applied to various visual tasks such as super-resolution and classification.

[0003] The traditional mainstream video stabilization method is an image processing m...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N5/232; H04N19/139; H04N19/557; G06N3/04; G06N3/08
CPC: H04N19/139; H04N19/557; G06N3/08; H04N23/64; H04N23/682; G06N3/044; G06N3/045
Inventors: 李恒 (Li Heng), 谢浩鹏 (Xie Haopeng), 肖亮 (Xiao Liang)
Owner: NANJING UNIV OF SCI & TECH