Stereoscopic video quality evaluation method based on 3D convolution neural network

A convolutional neural network and stereoscopic video technology, applied in the field of video processing, which solves the problem that reference videos are unavailable and achieves high computational efficiency.

Active Publication Date: 2018-06-29
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

However, reference videos are not available in most practical applications, and only reference-free methods are likely to meet practical needs.

Method used



Examples


Embodiment Construction

[0019] 1. Data preprocessing

[0020] (1) Difference video:

[0021] Calculate the difference video between the left view and the right view at stereoscopic video position (x, y, z) according to the following formula:

[0022] D_L(x, y, z) = |V_L(x, y, z) - V_R(x, y, z)|   (1)

[0023] where V_L and V_R denote the left and right views at stereoscopic video position (x, y, z), respectively, and D_L denotes the difference video.
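As an illustrative sketch only (not part of the patent text), equation (1) could be computed with NumPy as follows; the function name difference_video and the assumption that both views are grayscale arrays shaped (frames, height, width) are assumptions for this sketch, not the patent's.

```python
import numpy as np

def difference_video(left_view: np.ndarray, right_view: np.ndarray) -> np.ndarray:
    """Compute D_L(x, y, z) = |V_L(x, y, z) - V_R(x, y, z)| per equation (1).

    Both inputs are assumed to be grayscale videos shaped (frames, height, width).
    """
    if left_view.shape != right_view.shape:
        raise ValueError("left and right views must have the same shape")
    # Cast to a signed type first so the subtraction cannot wrap around for uint8 input.
    diff = np.abs(left_view.astype(np.int16) - right_view.astype(np.int16))
    return diff.astype(np.uint8)
```

The cast to int16 before subtraction is a precaution for 8-bit input; with floating-point frames the absolute difference can be taken directly.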

[0024] (2) Dataset enhancement:

[0025] We slide a 32×32 box with a stride of 32 to crop the entire video in the spatial dimension, and select frames with a stride of 8 in the temporal dimension, thereby dividing the original video into many low-resolution short video cubes. The size of each video cube is set to 10×32×32, that is, 10 frames with a resolution of 32×32 per frame. In this scheme, a 32×32 rectangular box is cropped at the same position in 10 consecutive frames to generate one video cube.
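A minimal sketch of the cube extraction described in paragraph [0025], assuming the input is a single grayscale video array shaped (frames, height, width); the function name extract_cubes and the choice to drop leftover border pixels that do not fill a full 32×32 box are assumptions made for illustration.

```python
import numpy as np

def extract_cubes(video: np.ndarray,
                  cube_frames: int = 10,
                  patch: int = 32,
                  spatial_stride: int = 32,
                  temporal_stride: int = 8) -> np.ndarray:
    """Divide a (frames, height, width) video into 10x32x32 cubes.

    A 32x32 box slides with stride 32 in space, the starting frame advances
    with stride 8 in time, and each cube keeps the same spatial position for
    10 consecutive frames.
    """
    frames, height, width = video.shape
    cubes = []
    for t in range(0, frames - cube_frames + 1, temporal_stride):
        for y in range(0, height - patch + 1, spatial_stride):
            for x in range(0, width - patch + 1, spatial_stride):
                cubes.append(video[t:t + cube_frames, y:y + patch, x:x + patch])
    return np.stack(cubes)  # shape: (num_cubes, 10, 32, 32)
```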



Abstract

The invention relates to a stereoscopic video quality evaluation method based on a 3D convolution neural network. The method comprises the following steps: preprocessing the data; training the 3D convolution neural network; and performing quality score fusion. The whole video data is divided randomly into two parts, where one part is used for training the 3D CNN model and the other part is used for model testing; after the training process, a prediction score is obtained for each input video block of the test stereoscopic video. In order to obtain the overall evaluation score of a video, a quality score fusion strategy that considers global temporal information is adopted: first, the cube-level scores are integrated over the spatial dimension by average pooling; the weight of each segment, calculated from its motion intensity, is defined to model the global temporal information; then the weight of each temporal segment is calculated as the share of its motion intensity in the total motion intensity of the stereoscopic video; and finally, the video-level prediction score is obtained as the weighted sum over the temporal segments, yielding the fused stereoscopic video quality score.
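A hedged sketch of the quality score fusion strategy summarized above: cube-level scores are average-pooled within each temporal segment, each segment is weighted by its share of the total motion intensity, and the video-level score is the weighted sum. The (num_segments, cubes_per_segment) score layout and the name motion_intensity are assumptions made for this illustration, not details taken from the patent.

```python
import numpy as np

def fuse_quality_scores(cube_scores: np.ndarray, motion_intensity: np.ndarray) -> float:
    """Fuse cube-level 3D CNN predictions into one video-level quality score.

    cube_scores:      (num_segments, cubes_per_segment) predicted scores, grouped
                      by temporal segment of the test stereoscopic video.
    motion_intensity: (num_segments,) motion intensity of each temporal segment.
    """
    # Average pooling of cube-level scores over the spatial dimension of each segment.
    segment_scores = cube_scores.mean(axis=1)
    # Weight of each segment = its motion intensity over the total motion intensity.
    weights = motion_intensity / motion_intensity.sum()
    # Video-level score as the weighted sum over temporal segments.
    return float(np.sum(weights * segment_scores))
```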

Description

Technical field

[0001] The invention belongs to the field of video processing and relates to a method for evaluating the quality of stereoscopic video.

Background technique

[0002] Today, there is a large amount of stereoscopic video in various fields such as entertainment and education. Visual quality is a basic and complex feature of stereoscopic video and is highly related to the user's quality of experience; during the successive production stages of stereoscopic video, including processing, compression, transmission and display, visual quality may be compromised to varying degrees. Therefore, research on stereoscopic video quality assessment (Stereoscopic Video Quality Assessment, SVQA) plays an important role in the development of stereoscopic video systems. In order to achieve higher efficiency and feasibility, non-subjective, automatic objective stereoscopic video quality assessment methods are highly desired. The subjective evaluation method is not only ...

Claims


Application Information

IPC(8): H04N17/00; G06T7/00
CPC: G06T7/0002; G06T2207/10021; G06T2207/20081; G06T2207/20084; G06T2207/30168; H04N17/00
Inventor: 杨嘉琛, 肖帅
Owner: TIANJIN UNIV