
Method for guiding multi-view video coding quantization process by visual perception characteristics

A multi-viewpoint video coding technology, applied in the fields of digital video signal modification, television, electrical components, etc.

Publication Date: 2013-05-29 (Inactive)
Applicant: SHANGHAI UNIV
Cites: 5 | Cited by: 37

AI Technical Summary

Problems solved by technology

However, the JND model it establishes is in the pixel domain and lacks a step for removing the frequency redundancy of the human eye, so its guidance of the quantization process is inaccurate.



Examples


Embodiment 1

[0042] In this embodiment, the method of using visual perception characteristics to guide the quantization process of multi-viewpoint video coding (see Figure 1) includes the following steps (a code sketch of how the steps fit together follows the list):

[0043] (1) Read the luminance value of each frame of the input video sequence, and establish a just noticeable distortion (JND) threshold model in the frequency domain;

[0044] (2) Perform intra-viewpoint and inter-viewpoint prediction on each frame of the input video sequence;

[0045] (3) Perform the discrete cosine transform (DCT) on the residual data;

[0046] (4) Dynamically adjust the quantization step size of each macroblock in the current frame;

[0047] (5) Dynamically adjust the Lagrangian parameters in the rate-distortion optimization process;

[0048] (6) Entropy-encode the quantized data to form a code stream for transmission over the network.
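Taken together, steps (2) through (6) describe one pass of a perceptually weighted quantizer. The Python sketch below is illustrative only, not the patent's implementation: the helper name perceptual_quantize and the jnd_table argument are hypothetical, the 8x8 macroblock grid and both scaling rules are assumptions, and SciPy's dctn stands in for an actual MVC codec transform.

```python
import numpy as np
from scipy.fft import dctn  # 2-D DCT, used here in place of a codec's transform

BLOCK = 8  # assumed transform/macroblock size

def perceptual_quantize(frame, prediction, jnd_table, base_qstep, base_lambda):
    """Sketch of steps (3)-(6): DCT the residual, then scale the quantization
    step and the RDO Lagrangian by the local JND tolerance.
    `jnd_table` is assumed to hold a per-coefficient threshold laid out at
    frame size (hypothetical layout, not specified by the patent text)."""
    h, w = frame.shape
    coded_blocks = []
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            block = frame[y:y+BLOCK, x:x+BLOCK].astype(np.float64)
            pred = prediction[y:y+BLOCK, x:x+BLOCK].astype(np.float64)

            # (3) discrete cosine transform of the prediction residual
            residual = dctn(block - pred, norm='ortho')

            # (4) widen the quantization step where the HVS tolerates more
            # distortion; `tolerance` averages the JND thresholds of the block
            tolerance = float(np.mean(jnd_table[y:y+BLOCK, x:x+BLOCK]))
            qstep = base_qstep * tolerance

            # (5) keep rate-distortion optimization consistent with the new
            # step size (lambda ~ qstep^2 in the usual high-rate model)
            lam = base_lambda * tolerance ** 2

            # (6) quantize; entropy coding of `quantized` would follow here
            quantized = np.round(residual / qstep)
            coded_blocks.append(((y, x), quantized, qstep, lam))
    return coded_blocks
```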

Embodiment 2

[0049] This embodiment is basically the same as Embodiment 1; its distinguishing features are as follows:

[0050] The frequency-domain JND model established in step (1) above comprises four component models (see Figure 2):

[0051] (1-1) The spatial contrast sensitivity function model is based on the band-pass characteristic curve of the human eye. For a specific spatial frequency \omega_{ij}, the basic JND threshold can be expressed as:

[0052] T_{\text{basic}}(i,j) = \frac{\exp(c\,\omega_{ij})}{a + b\,\omega_{ij}}

where a, b and c are constants fitted to the contrast sensitivity curve.

[0053] The spatial frequency \omega_{ij} is calculated as:

[0054] \omega_{ij} = \frac{1}{2N}\sqrt{\left(\frac{i}{\theta_x}\right)^2 + \left(\frac{j}{\theta_y}\right)^2}

[0055] where i and j indicate the coordinate position within the DCT transform block, N is the dimension of the DCT transform block, and \theta_x and \theta_y indicate the horizontal and vertical viewing angles. It is generally assumed that the horizontal viewing angle equals the vertical viewing angle, which is expressed as:

[0056] \theta_x = \theta_y = 2\arctan\left(\frac{1}{2\,R_{vd}\,H}\right)

where R_{vd} is the ratio of the viewing distance to the picture height and H is the picture height in pixels.

[0057] Since the visual sensitivity of the human eye is directional, being more sensitive to the horizontal and vertical directions and less sensitive to the diagonal directions, …
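The CSF component above can be computed as a small per-block lookup table. The sketch below is a hedged reconstruction, not the patent's code: the constants a=1.33, b=0.11, c=0.18 and r=0.6 come from the commonly cited Ahumada-Peterson / Wei-Ngan DCT-domain JND parameterization (the patent's own values are not visible in this extraction), the DCT basis normalization factor is omitted for brevity, and the directional correction implements the standard form hinted at by the truncated paragraph [0057].

```python
import numpy as np

def basic_jnd_table(N=8, ratio_vd=3.0, pic_height=1080,
                    a=1.33, b=0.11, c=0.18, r=0.6):
    """Base JND threshold for every (i, j) subband of an N x N DCT block.
    ratio_vd is the viewing-distance-to-picture-height ratio; pic_height
    is the picture height in pixels (both are illustrative defaults)."""
    # visual angle of one pixel, in degrees (spatial frequency is in
    # cycles per degree); horizontal == vertical, as stated in [0055]
    theta = 2.0 * np.degrees(np.arctan(1.0 / (2.0 * ratio_vd * pic_height)))

    i = np.arange(N, dtype=float).reshape(-1, 1)
    j = np.arange(N, dtype=float).reshape(1, -1)

    # spatial frequency of subband (i, j), per the formula in [0054]
    omega = np.sqrt((i / theta) ** 2 + (j / theta) ** 2) / (2.0 * N)
    omega[0, 0] = omega[0, 1]  # DC has no defined frequency; reuse lowest AC

    # band-pass contrast sensitivity curve -> base threshold ([0052])
    t_base = np.exp(c * omega) / (a + b * omega)

    # directional correction ([0057]): thresholds grow (sensitivity drops)
    # as the subband orientation approaches the diagonal
    omega_i = i / (2.0 * N * theta)
    omega_j = j / (2.0 * N * theta)
    sin_phi = 2.0 * omega_i * omega_j / np.maximum(omega ** 2, 1e-12)
    phi = np.arcsin(np.clip(sin_phi, -1.0, 1.0))
    return t_base / (r + (1.0 - r) * np.cos(phi) ** 2)

# Example: thresholds for an 8x8 DCT block viewed at 3 picture heights
print(basic_jnd_table().round(3))
```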



Abstract

The invention relates to a method for guiding the coding quantization process by visual perception characteristics. The method includes the following steps: (1) the luminance value of each frame of an input video sequence is read, and a just noticeable distortion threshold model in the frequency domain is established; (2) intra-viewpoint and inter-viewpoint prediction is performed on each frame of the input video sequence; (3) the residual data are subjected to the discrete cosine transform; (4) the quantization step size of each macroblock in the current frame is dynamically adjusted; (5) the Lagrangian parameters in the rate-distortion optimization process are dynamically adjusted; and (6) the quantized data are entropy coded to form a code stream, which is transmitted over the network. The method improves video compression efficiency while keeping subjective quality essentially unchanged, making the video well suited to network transmission.

Description

Technical field

[0001] The invention relates to the technical field of multi-viewpoint video coding and decoding, and in particular to a method for guiding the multi-viewpoint video coding quantization process by visual perception characteristics, suitable for the encoding and decoding of high-definition 3D video signals.

Background technique

[0002] With the development of the times, people have ever higher requirements for the audio-visual experience and are no longer satisfied with existing single-view two-dimensional video. The demand for a stereoscopic experience keeps growing: instead of stereoscopic perception from a fixed angle, viewers want to experience it from any angle, which has given birth to the development of multi-viewpoint coding technology. However, multi-viewpoint video greatly increases the amount of data required, so effectively improving video compression efficiency has become a research hotspot. At present, video compression technology mainly focuses…


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N7/26, H04N7/30, H04N13/00, H04N19/124, H04N19/154
Inventors: 王永芳, 商习武, 刘静, 宋允东, 张兆杨
Owner: SHANGHAI UNIV