
Full-reference virtual-reality video quality evaluation method based on convolutional neural networks

A method combining convolutional neural networks with virtual reality technology, applied in the field of virtual-reality video quality evaluation. It addresses the absence of a standard, objective evaluation system for VR video, and achieves simple video preprocessing, easy operation, and accurate reflection of video quality.

Publication Date: 2018-08-24 (status: Inactive)
TIANJIN UNIV


Problems solved by technology

[0003] Because virtual reality technology has emerged only in recent years, there is as yet no standard, objective evaluation system for VR video quality.




Embodiment Construction

[0017] To make the technical solution of the present invention clearer, its specific embodiments are further described below. The invention is realized according to the following steps:

[0018] Step 1: Construct the difference video V_d according to the principle of stereo perception. First convert each frame of the original VR video and the distorted VR video to grayscale, then obtain the required difference video from the left-view video V_l and the right-view video V_r. The value of the difference video V_d at position (x, y, z) is computed as shown in formula (1):

[0019] V_d(x, y, z) = |V_l(x, y, z) − V_r(x, y, z)|    (1)
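As a concrete illustration of formula (1), here is a minimal sketch in Python, assuming the left-view and right-view videos are already loaded as (frames, height, width, 3) uint8 arrays; the function name and the use of numpy/OpenCV are illustrative assumptions, not part of the patent.

```python
import numpy as np
import cv2

def difference_video(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """Build the difference video V_d = |V_l - V_r| from grayscaled views."""
    assert left_rgb.shape == right_rgb.shape
    frames = []
    for l_frame, r_frame in zip(left_rgb, right_rgb):
        # Grayscale each frame, then widen dtype so subtraction cannot wrap.
        v_l = cv2.cvtColor(l_frame, cv2.COLOR_RGB2GRAY).astype(np.int16)
        v_r = cv2.cvtColor(r_frame, cv2.COLOR_RGB2GRAY).astype(np.int16)
        # Absolute difference per formula (1); cast back to uint8.
        frames.append(np.abs(v_l - v_r).astype(np.uint8))
    return np.stack(frames)  # shape: (frames, height, width)
```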

[0020] Step 2: Divide the VR difference video into blocks to form video patches, thereby expanding the capacity of the dataset. Specifically, one frame is extracted every eight frames from all VR difference videos, for a total of N frames. A square image block with a size of 32×32 pixe...
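The frame sampling and blocking of Step 2 can be sketched as follows, assuming V_d is the (frames, H, W) uint8 array from Step 1; the frame step and block size follow the text above, while the function name and the handling of ragged borders are illustrative assumptions.

```python
import numpy as np

def extract_patches(v_d: np.ndarray, frame_step: int = 8, block: int = 32) -> np.ndarray:
    """Sample one frame every `frame_step` frames, then cut each sampled
    frame into non-overlapping block x block squares. Blocks at the same
    spatial location across the N sampled frames form one video patch."""
    sampled = v_d[::frame_step]                 # the N sampled frames
    n, h, w = sampled.shape
    h, w = h - h % block, w - w % block         # drop any ragged border
    patches = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            # One patch: an (N, block, block) stack at a fixed location.
            patches.append(sampled[:, y:y + block, x:x + block])
    return np.stack(patches)                    # (num_patches, N, block, block)
```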



Abstract

The invention relates to a full-reference virtual-reality video quality evaluation method based on convolutional neural networks. The method comprises the following steps. Video preprocessing is conducted, wherein VR difference videos are obtained from the left-view video and right-view video of a VR video, frames are uniformly extracted from the difference videos, each frame is cut into non-overlapping blocks, and a VR video patch is formed by the video blocks at the same location in each frame. Two convolutional neural network models with the same configuration are built. The models are trained by gradient descent with the VR video patches as input, each patch accompanied by the quality score of the original video as its label; the labeled patches are fed to the networks in batches, the weights of each layer are fully optimized over multiple iterations, and convolutional neural network models capable of extracting virtual-reality video features are finally obtained. The features are extracted by means of the convolutional neural networks; local scores are obtained by means of a support vector machine, and a final score is obtained by adopting a score fusion strategy. The method increases the accuracy of the objective evaluation.
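The overall pipeline described in the abstract can be summarized in a short sketch. Everything concrete here is an illustrative assumption: the layer sizes, the feature dimension, the use of PyTorch/scikit-learn, the concatenation of reference and distorted features, and mean fusion. The patent only fixes the overall structure (two identically configured CNNs, a support vector machine for local scores, and a fusion step).

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVR

class PatchCNN(nn.Module):
    """Small CNN mapping a stack of 32x32 patch frames to a feature vector.
    Two instances of this class play the role of the two identically
    configured networks in the abstract."""
    def __init__(self, in_frames: int, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_frames, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(64 * 8 * 8, feat_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def predict_score(ref_feats: np.ndarray, dist_feats: np.ndarray, svr: SVR) -> float:
    """Local per-patch scores from a trained SVR, fused by simple averaging
    (one plausible score-fusion strategy; the patent's exact strategy is not
    given in this excerpt)."""
    local = svr.predict(np.hstack([ref_feats, dist_feats]))
    return float(local.mean())
```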

Description

Technical Field

[0001] The invention belongs to the field of video processing and relates to a virtual-reality video quality evaluation method.

Background Technique

[0002] As a new simulation and interaction technology, virtual reality (VR) is used in many fields such as architecture, gaming, and the military. It can create a virtual environment consistent with the rules of the real world, or a simulated environment entirely detached from reality, giving people a more realistic audio-visual and on-the-spot experience [1]. As an important carrier of virtual reality, panoramic stereoscopic video currently comes closest to the definition of VR video and plays a major role. However, in the process of capturing, storing, and transmitting VR videos, some distortion is inevitably introduced by equipment and processing methods, which affects the quality of VR videos. It is therefore very important to study an evaluation method that can e...


Application Information

IPC(8): H04N17/00; H04N13/106; G06N3/02
Inventor: 杨嘉琛, 刘天麟
Owner: TIANJIN UNIV