
JND (Just-noticeable difference) based video encoding method and device

A JND-based video coding technology, applied in the field of video coding, which solves the problems of an inaccurate JND model and reduced compressed-video quality, and achieves good quality, an accurate model, and improved compression efficiency.

Active Publication Date: 2017-02-22
ELECTRIC POWER RES INST OF GUANGDONG POWER GRID

AI Technical Summary

Problems solved by technology

[0005] Embodiments of the present invention provide a JND-based video coding method and device. Multiple sub-models of the JND model are established and a basic model is determined from among them; the threshold of the basic model is taken as the basic threshold, the weighting coefficients of the other sub-models are superimposed on that threshold, and the offset effects between different sub-models are subtracted at the same time, so as to obtain a more accurate JND model and threshold. This solves the technical problem in the prior art that directly superimposing a threshold obtained by multiplying the effects of several sub-models as weighting coefficients leads to an inaccurate JND model and reduces the quality of the compressed video.

Method used



Examples


Embodiment 1

[0045] Referring to Figure 1, a first embodiment of the JND-based video coding method provided by an embodiment of the present invention includes:

[0046] 101. Determine the basic model according to the preset spatial contrast sensitivity sub-model, the preset brightness masking factor sub-model, the preset texture masking sub-model, and the preset temporal contrast sensitivity sub-model, and calculate the threshold of the basic model as the basic threshold T_basic;

[0047] In this embodiment, the basic model is first determined according to the preset spatial contrast sensitivity sub-model, the preset luminance masking factor sub-model, the preset texture masking sub-model, and the preset temporal contrast sensitivity sub-model, and the threshold of the basic model is calculated as the basic threshold T_basic.

[0048] 102. Through the first formula JND = T_basic × (F1 × F2 × F3 − α × F1 × F2 − β × F2 × F3 − γ × F1 × F3), integrate and calculate the final threshold...
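The combination formula above can be expressed directly in code. The following is a minimal sketch, assuming F1, F2, and F3 are the weighting coefficients contributed by the three non-basic sub-models and that α, β, and γ are empirically chosen offset coefficients; the function name and the example values are illustrative and not taken from the patent.

```python
import numpy as np

def combined_jnd_threshold(t_basic, f1, f2, f3, alpha=0.1, beta=0.1, gamma=0.1):
    """Combine the basic threshold with the weighting coefficients of the
    other sub-models, subtracting the pairwise offset effects.

    Implements: JND = T_basic * (F1*F2*F3 - a*F1*F2 - b*F2*F3 - g*F1*F3)
    Inputs may be scalars or NumPy arrays of per-coefficient values.
    """
    t_basic = np.asarray(t_basic, dtype=float)
    f1, f2, f3 = (np.asarray(f, dtype=float) for f in (f1, f2, f3))
    return t_basic * (f1 * f2 * f3
                      - alpha * f1 * f2
                      - beta * f2 * f3
                      - gamma * f1 * f3)

# Example for a single DCT coefficient position, with illustrative values.
jnd = combined_jnd_threshold(t_basic=12.0, f1=1.2, f2=0.9, f3=1.1)
print(jnd)
```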

Embodiment 2

[0050] Referring to Figure 2, a second embodiment of the JND-based video coding method provided by an embodiment of the present invention includes:

[0051] 201. Perform motion compensation processing on the pixel blocks of the video to obtain residuals, and transform the residuals to obtain DCT coefficients;

[0052] In this embodiment, it is first necessary to perform motion compensation processing on the pixel blocks of the video to obtain residuals, and then to transform the residuals to obtain DCT coefficients.
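As a minimal sketch of this step, the code below forms a motion-compensated residual for one block and applies a 2-D DCT-II to it. The 8×8 block size, the random placeholder prediction, and the orthonormal DCT construction are assumptions for illustration; the patent does not specify these details.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def residual_dct(current_block, predicted_block):
    """Motion-compensated residual followed by a 2-D DCT of the block."""
    residual = current_block.astype(float) - predicted_block.astype(float)
    c = dct_matrix(residual.shape[0])
    return c @ residual @ c.T  # DCT coefficients of the residual block

# Example with an 8x8 block; the prediction here is only a placeholder.
cur = np.random.randint(0, 256, (8, 8))
pred = np.random.randint(0, 256, (8, 8))
coeffs = residual_dct(cur, pred)
```

In a real encoder the predicted block would come from motion estimation against a reference frame; the resulting DCT coefficients are what the JND thresholds of the later steps would be compared against.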

[0053] 202. Establish a spatial contrast sensitivity submodel, a preset brightness masking factor submodel, a preset texture masking submodel, and a preset temporal contrast sensitivity submodel, all of which include weighting coefficients;

[0054] In this embodiment, after performing motion compensation processing on the video pixel blocks to obtain residuals and transforming the residuals to obtain DCT coefficients, it is also necessary to establish a spatial contrast sensitivity sub-model...



Abstract

The embodiment of the invention discloses a JND (just-noticeable difference) based video encoding method and device. A plurality of sub-models of a JND model are built and a foundational model is determined from among the sub-models; taking the threshold of the foundational model as the foundational threshold, threshold superposition is performed according to the weighting coefficients of the other sub-models, and the offset effects of different sub-models are deducted at the same time, so as to acquire a more accurate JND model and a more accurate threshold. The method and the device solve the technical problems in the prior art that the JND model is inaccurate and the quality of compressed video is reduced because a threshold calculated by multiplying the effects of the plurality of sub-models as weighting coefficients is directly superposed.

Description

Technical field

[0001] The present invention relates to the field of video coding, and in particular to a JND-based video coding method and device.

Background technique

[0002] The goal of video coding technology is to compress and encode video, achieving a larger compression ratio while maintaining the same video quality so that it can be transmitted over a smaller bandwidth. Many existing video coding methods, such as H.264 and AVS, focus on removing objective redundancy in video sequences, that is, achieving maximum compression efficiency with minimum objective distortion. So-called objective distortion refers to image distortion computed with objective indicators by direct comparison with the source image, without considering human physiological factors; commonly used indicators include PSNR (peak signal-to-noise ratio). However, the images seen by the human eye are affected by human physiological factors, and some objective distortions...
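For context on the objective metric mentioned above, the following is a minimal sketch of a PSNR computation between a source frame and a reconstructed frame, assuming 8-bit pixel values; it is illustrative background only and not part of the patented method.

```python
import numpy as np

def psnr(source, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio between two frames, in dB."""
    source = source.astype(float)
    reconstructed = reconstructed.astype(float)
    mse = np.mean((source - reconstructed) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```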

Claims


Application Information

IPC(8): H04N19/90, H04N19/625
CPC: H04N19/625, H04N19/90
Inventor: 唐曦凌王琦
Owner: ELECTRIC POWER RES INST OF GUANGDONG POWER GRID