
No-reference video quality evaluation method based on short-time space-time fusion network and long-time sequence fusion network

A spatiotemporal fusion network technology, applied to neural learning methods, biological neural network models, television, etc., that addresses problems such as poor evaluation performance and achieves accurate video quality scores

Pending Publication Date: 2021-12-10
COMMUNICATION UNIVERSITY OF CHINA

AI Technical Summary

Problems solved by technology

[0011] Aiming at the poor performance of no-reference methods in existing video quality evaluation, the present invention proposes an objective no-reference quality evaluation method. The invention divides the video into frames, and each frame is passed through a short-time spatio-temporal fusion network to obtain a 64-dimensional feature vector and a preliminary predicted quality score. The feature vectors are then combined into a feature sequence in temporal order, and the preliminary predicted quality scores are converted into inter-frame impact factors under the guidance of prior knowledge. Both serve as the input of the long-time sequence fusion network, which produces the overall quality score of the video and completes the quality evaluation process.
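This paragraph fixes the data flow (a 64-dimensional feature vector and a preliminary score per frame, then sequence-level fusion) but does not disclose the network architectures. The following is a minimal PyTorch sketch of that flow; the module names ShortTimeNet and LongTimeNet, the small convolutional backbone, and the GRU are illustrative assumptions, while the 64-dimensional feature size and the two-stage ordering come from the text.

```python
import torch
import torch.nn as nn

class ShortTimeNet(nn.Module):
    """Short-time spatio-temporal fusion network (illustrative stand-in).
    Maps one frame to a 64-d feature vector plus a preliminary quality score."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (B, 64)
        )
        self.score_head = nn.Linear(64, 1)

    def forward(self, frame):                 # frame: (B, 3, H, W)
        feat = self.backbone(frame)           # 64-d feature vector per frame
        return feat, self.score_head(feat)    # feature, preliminary score

class LongTimeNet(nn.Module):
    """Long-time sequence fusion network (illustrative GRU stand-in).
    Consumes the temporally ordered feature sequence, weighted by the
    inter-frame impact factors, and predicts the overall video score."""
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(64, 64, batch_first=True)
        self.head = nn.Linear(64, 1)

    def forward(self, feats, impact):         # feats: (B, T, 64), impact: (B, T)
        out, _ = self.gru(feats * impact.unsqueeze(-1))
        return self.head(out[:, -1])          # overall quality score
```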



Embodiment approach

[0049] The flow chart of the implementation is shown in figure 1 and includes the following steps; an end-to-end code sketch follows the list:

[0050] Step S10, obtaining a video frame from the video;

[0051] Step S20, building and training a short-time spatio-temporal fusion network;

[0052] Step S30, obtaining the feature sequences of several video segments and the inter-frame impact factors of each frame in each segment;

[0053] Step S40, building and training a long-time sequence fusion network;

[0054] Step S50, evaluating the quality of the video;
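Read end to end, steps S10 through S50 suggest an inference driver like the sketch below, reusing the illustrative ShortTimeNet/LongTimeNet modules from the earlier sketch. The patent says only that the preliminary scores become inter-frame impact factors "under the guidance of prior knowledge"; the softmax weighting toward low-quality frames used here is one assumed example of such a prior, not the disclosed rule.

```python
import torch

def impact_factors(scores):
    """Turn preliminary per-frame scores (1, T) into inter-frame impact
    factors. Assumed prior: low-quality frames dominate perceived quality,
    so worse frames receive larger weights."""
    return torch.softmax(-scores, dim=1)

def evaluate_video(frames, short_net, long_net):
    """Steps S30 and S50: per-frame features and preliminary scores from
    the short-time network, impact factors from the scores, then the
    overall score from the long-time sequence fusion network."""
    feats, scores = [], []
    for frame in frames:                   # iterable of (3, H, W) tensors
        f, s = short_net(frame.unsqueeze(0))
        feats.append(f)
        scores.append(s)
    feats = torch.stack(feats, dim=1)      # (1, T, 64) feature sequence
    scores = torch.cat(scores, dim=1)      # (1, T) preliminary scores
    return long_net(feats, impact_factors(scores)).item()
```

With a decoded frame list, evaluate_video(frames, ShortTimeNet(), LongTimeNet()) returns a single scalar score for the whole video.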

[0055] In this embodiment, step S10, obtaining video frames from the video, further includes the following steps (a code sketch follows the list):

[0056] Step S100, extracting video frames, converting the complete video sequence from formats such as YUV to BMP format, and saving it frame by frame;

[0057] Step S110, sampling video frames, selecting one frame in every 4 and directly discarding the rest as redundant;

[0058] Step S120, generating a luminance...
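A minimal sketch of steps S100 and S110, assuming OpenCV and an input container that OpenCV can decode (a raw planar YUV file would additionally need its width, height, and pixel format supplied); the decoder converts YUV to RGB internally before each kept frame is written out as BMP, and the sampling interval of 4 follows the text.

```python
import cv2

def extract_frames(video_path, out_dir, interval=4):
    """Steps S100-S110: decode the video, keep every `interval`-th frame,
    and save the kept frames as BMP images, frame by frame."""
    cap = cv2.VideoCapture(video_path)
    index, kept = 0, 0
    while True:
        ok, frame = cap.read()             # decoder handles YUV -> BGR
        if not ok:
            break
        if index % interval == 0:          # sample at intervals of 4
            cv2.imwrite(f"{out_dir}/frame_{kept:05d}.bmp", frame)
            kept += 1
        index += 1                         # other frames are discarded
    cap.release()
    return kept
```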



Abstract

The invention discloses a no-reference video quality evaluation method based on a short-time spatio-temporal fusion network and a long-time sequence fusion network. Quality prediction of video frames is realized through two networks that operate, in sequence, over different time scales. The short-time spatio-temporal fusion network extracts and fuses the spatio-temporal characteristics of the current frame, yielding the result of the frame's temporal characteristics acting on its spatial characteristics. The long-time sequence fusion network models the interaction between frames within a period of time and predicts the quality score of the video under the guidance of prior knowledge. The method takes video frames as input, designs a network at the frame level to fuse temporal and spatial features, considers the inter-frame relationship at the sequence level to further refine the features of the current frame, predicts the quality of each frame through deep learning, and finally completes the task of evaluating the overall quality of the video. Because the characteristics of the video frames are refined and enriched, model performance is remarkably improved.
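The abstract states that, inside the short-time network, the temporal characteristics of the current frame act on its spatial characteristics, but not how. One plausible realization, shown here as an assumption rather than the disclosed design, is a gating block in which temporal features computed from the difference between the current frame and a neighboring frame modulate the current frame's spatial features.

```python
import torch
import torch.nn as nn

class SpatioTemporalFusion(nn.Module):
    """Illustrative short-time fusion block (assumed, not the disclosed
    design): temporal features from the current/previous frame difference
    gate the spatial features of the current frame."""
    def __init__(self, channels=64):
        super().__init__()
        self.spatial = nn.Conv2d(3, channels, 3, padding=1)
        self.temporal = nn.Conv2d(3, channels, 3, padding=1)
        self.gate = nn.Sigmoid()

    def forward(self, cur, prev):          # (B, 3, H, W) current and previous frame
        s = self.spatial(cur)              # spatial features of the current frame
        t = self.temporal(cur - prev)      # temporal features from the frame difference
        return s * self.gate(t)            # temporal features act on spatial features
```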

Description

Technical field

[0001] The invention relates to a no-reference video quality evaluation method based on a short-time spatio-temporal fusion network and a long-time sequence fusion network, and belongs to the technical field of digital video processing.

Background technique

[0002] As a complex source of visual information, video contains a great deal of valuable information. Video quality directly affects people's subjective experience and information acquisition, and can guide other video tasks such as related equipment development, system monitoring, and quality restoration. Research on Video Quality Assessment (VQA) has therefore received extensive attention in recent years.

[0003] Video quality evaluation methods can be divided into subjective and objective methods. Subjective evaluation is the assessment of video quality by human observers. Although the evaluation results are in line with people's subjective...


Application Information

IPC(8): H04N17/00, G06N3/04, G06N3/08
CPC: H04N17/00, G06N3/049, G06N3/08, G06N3/045
Inventor: 史萍 (Shi Ping), 王雪婷 (Wang Xueting), 潘达 (Pan Da)
Owner: COMMUNICATION UNIVERSITY OF CHINA