General video time domain alignment method based on neural network

A neural-network and time-domain technology, applied to neural learning methods, biological neural network models, neural architectures, and related directions. It addresses problems such as temporal inconsistency between video frames, and achieves a realistic viewing experience, reduced R&D and application costs, superior image processing capability, and the effect of capturing temporal and spatial correlations.

Pending Publication Date: 2021-05-18
福建帝视信息科技有限公司

AI Technical Summary

Problems solved by technology

[0004] In view of this, the purpose of the present invention is to propose a general neural-network-based video temporal alignment method, which solves the temporal inconsistency that arises when an image processing model is applied directly to a video task, and which has the advantage of solving the temporal inconsistency problem of many different video tasks with a single general algorithm.



Embodiment Construction

[0046] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0047] It should be pointed out that the following detailed description is exemplary and is intended to provide further explanation of the present application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.

[0048] It should be noted that the terminology used here is only for describing specific implementations and is not intended to limit the exemplary implementations according to the present application. As used herein, unless the context clearly dictates otherwise, the singular is intended to include the plural. It should also be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components and/or combinations thereof.



Abstract

The invention provides a general video temporal alignment method based on a neural network. The method comprises the following steps: acquiring all original video image frames of a current video; processing the original video image frames with an image processing neural network model to obtain processed image frames; constructing a deep convolutional neural network for aligning the time domain between video image frames; taking the original video image frames and the processed image frames as input, and obtaining temporally aligned output video image frames from the deep convolutional neural network; and synthesizing the temporally aligned output video image frames to obtain the final temporally aligned complete video. The method solves the temporal inconsistency produced when an image processing model is applied directly to a video task, and has the advantage of solving the temporal inconsistency problem of many different video tasks with a single general algorithm.
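The five steps of the abstract can be sketched as a small pipeline. The sketch below is purely illustrative and not the patent's implementation: `image_model` stands in for any per-frame image processing network, and `temporal_align` stands in for the patent's learned deep convolutional aligner, which is approximated here by a simple 3-frame moving average; all function names are hypothetical.

```python
import numpy as np

def image_model(frame):
    # Stand-in for any per-frame image processing network (hypothetical);
    # here it just brightens and clips the frame.
    return np.clip(frame * 1.1, 0.0, 1.0)

def temporal_align(orig_frames, processed_frames):
    # Stand-in for the patent's deep CNN temporal aligner. The real method
    # learns a mapping from original + processed frames; this sketch only
    # smooths each processed frame with its temporal neighbours.
    aligned = []
    for t in range(len(processed_frames)):
        window = processed_frames[max(0, t - 1):t + 2]
        aligned.append(np.mean(window, axis=0))
    return aligned

def align_video(frames):
    # Step 2: process every original frame with the image model.
    processed = [image_model(f) for f in frames]
    # Steps 3-4: feed original and processed frames to the aligner.
    return temporal_align(frames, processed)
    # Step 5 (synthesis back into a video container) happens downstream.
```

In the real method the moving average would be replaced by a trained deep convolutional network, so the aligner can correct temporal inconsistency without blurring motion.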

Description

Technical field

[0001] The invention relates to the technical field of video image processing, in particular to a neural-network-based general video temporal alignment method.

Background technique

[0002] In recent years, with the rapid development of computer vision, various image processing tasks based on neural networks have made major breakthroughs, and many image processing algorithms show excellent performance on single-image tasks. For example, in image denoising, a well-designed neural network can generate a clear, noise-free image from a noisy input, achieving excellent visual results. In practical applications, video scenes are closer to actual needs. Most video tasks are handled by extracting the video into a sequence of image frames, processing them frame by frame with an image processing neural network model, and finally synthesizing the processed frames to obtain the processed video. The video qu...
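The frame-by-frame workflow described above can be illustrated with a toy example of why it produces temporal inconsistency. The model below is hypothetical and not from the patent: a per-frame network with no temporal context gives different outputs even for identical consecutive input frames, and that frame-to-frame difference appears as flicker in the synthesized video.

```python
import numpy as np

def per_frame_model(frame, rng):
    # Hypothetical image network with no temporal context: its output
    # depends on frame-local randomness, like residual noise left by
    # an independently applied denoiser.
    return frame + rng.normal(0.0, 0.05, size=frame.shape)

rng = np.random.default_rng(0)

# A perfectly static scene: every input frame is identical.
frame = np.full((4, 4), 0.5)
outputs = [per_frame_model(frame, rng) for _ in range(5)]

# Temporal inconsistency: consecutive outputs differ even though the
# inputs do not -- this difference is visible as flicker in the video.
flicker = max(float(np.abs(a - b).max()) for a, b in zip(outputs, outputs[1:]))
```

A temporal alignment stage, such as the one proposed in this patent, is meant to drive this frame-to-frame difference toward zero for static content while preserving genuine motion.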

Claims


Application Information

IPC(8): G06T5/50, G06T5/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06T5/50, G06N3/08, G06T2207/10016, G06T2207/20081, G06T2207/20084, G06N3/044, G06N3/045, G06F18/22, G06T5/73, G06T5/70
Inventor: 陈弘林, 李茹, 谢军伟, 童同, 高钦泉, 罗鸣
Owner: 福建帝视信息科技有限公司