
Depth video encoding method based on edges and oriented toward virtual visual rendering

A depth video coding technology, applied in the fields of digital video signal modification, electrical components, and image communication, that addresses the problem of limited gains in coding efficiency and achieves the effects of protecting edge information, shortening coding time, and improving quality.

Publication status: Inactive; Publication Date: 2014-08-20
SHANGHAI UNIV

AI Technical Summary

Problems solved by technology

Existing methods exploit the sharp transitions between edge and smooth areas in the depth map, improving depth-map coding efficiency while preserving virtual-view quality, but the resulting gain in coding efficiency is limited.




Embodiment Construction

[0037] One implementation of the invention is as follows.

[0038] The present invention uses the encoding reference software JM18.0 and the virtual view synthesis reference software VSRS3.5 as the experimental platform. As shown in Figure 1, the test video sequences are Akko&Kayo, Breakdancers and Ballet, with 50, 100 and 100 frames and resolutions of 640×480, 1024×768 and 1024×768, respectively. The coding viewpoints of the Akko&Kayo sequence are viewpoints 27 and 29, and the coding viewpoints of the Breakdancers and Ballet sequences are both viewpoints 0 and 2.

[0039] Referring to Figure 2, the edge-based depth video coding method for virtual view rendering of the present invention comprises the following steps:

[0040] (1) Edge detection: use the Sobel edge detection algorithm to process each macroblock (MB) of the depth map, a...
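As an illustration of step (1), the following Python sketch computes a per-macroblock edge value with the Sobel operator. The exact definition of the edge value is not given in the excerpt above, so treating it as the mean Sobel gradient magnitude over a 16×16 macroblock is an assumption.

import numpy as np

# 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def sobel_edge_value(mb):
    """Return a scalar edge value for one 16x16 depth macroblock
    (assumed here: mean Sobel gradient magnitude over the block)."""
    mb = mb.astype(np.float64)
    padded = np.pad(mb, 1, mode="edge")          # replicate borders
    h, w = mb.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]    # 3x3 neighbourhood of pixel (i, j)
            gx[i, j] = np.sum(window * SOBEL_X)
            gy[i, j] = np.sum(window * SOBEL_Y)
    return float(np.sqrt(gx ** 2 + gy ** 2).mean())

# Example: a flat depth block versus a block containing a sharp depth edge.
flat_mb = np.full((16, 16), 128, dtype=np.uint8)
edge_mb = np.hstack([np.full((16, 8), 40), np.full((16, 8), 200)]).astype(np.uint8)
print(sobel_edge_value(flat_mb), sobel_edge_value(edge_mb))   # low vs. high edge value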



Abstract

The invention discloses an edge-based depth video encoding method oriented toward virtual view rendering. The method comprises the following steps: (1) edge detection: the macroblocks of the depth map are processed with the Sobel edge detection algorithm, and the edge value of each macroblock is obtained; (2) macroblock classification: a threshold value lambda is set, the edge value of each macroblock is compared with lambda, and the macroblocks are classified into edge regions and flat regions; (3) macroblock encoding: the depth-map macroblocks are encoded with different prediction modes according to their class; (4) mid-value mean-shift filtering: a mean-shift filter removes the blocking artifacts of the encoded depth-map macroblocks in the edge regions and protects the edges. On the premise that the subjective quality of the virtual-view video is essentially unchanged, the algorithm increases the compression speed of the depth map and improves the encoding quality of the depth map used for virtual view rendering.
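For concreteness, the following Python sketch mirrors steps (2) to (4) of the abstract. The value of the threshold lambda, the candidate prediction-mode sets for edge and flat macroblocks, and the use of a simple range-domain mean-shift pass as the deblocking filter are all illustrative assumptions; the text above does not specify them.

import numpy as np

LAMBDA = 30.0    # classification threshold for macroblock edge values (illustrative)

def classify_macroblocks(edge_values):
    """Step (2): True marks an edge-region macroblock, False a flat-region one."""
    return np.asarray(edge_values) > LAMBDA

def candidate_modes(is_edge_mb):
    """Step (3): restrict the prediction modes tried per macroblock class.
    The mode sets below are placeholders, not the patent's actual sets."""
    if is_edge_mb:
        return ["Intra4x4", "Intra8x8", "Inter8x8"]      # small partitions keep sharp edges
    return ["Skip", "Inter16x16", "Intra16x16"]          # large partitions save mode-decision time

def mean_shift_deblock(block, bandwidth=8.0, iters=3):
    """Step (4): a simplified range-domain mean-shift pass over one decoded
    edge-region macroblock, standing in for the deblocking filter."""
    out = block.astype(np.float64).ravel()
    for _ in range(iters):
        shifted = np.empty_like(out)
        for k, v in enumerate(out):
            weights = np.exp(-((out - v) / bandwidth) ** 2)   # Gaussian range kernel
            shifted[k] = np.sum(weights * out) / np.sum(weights)
        out = shifted
    return out.reshape(block.shape)

# Example: classify two macroblocks by their edge values and pick their mode sets.
edge_values = np.array([3.2, 47.5])
for ev, is_edge in zip(edge_values, classify_macroblocks(edge_values)):
    print(ev, "edge" if is_edge else "flat", candidate_modes(is_edge))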

Description

Technical field
[0001] The present invention relates to a depth video encoding method, in particular to an edge-based depth video encoding method oriented toward virtual view rendering.
Background technology
[0002] 3D video provides the audience with a stereoscopic depth effect and enhances the sense of visual realism. An existing depth-based multi-view stereo system first uses a depth estimation algorithm to obtain depth information from two channels of color video; it then encodes and transmits one or more channels of color video together with the depth information; finally, at the decoding end, Depth Image Based Rendering (DIBR) technology is used to synthesize 8 viewpoints. Compared with a color-only multi-view stereo system, the depth-based multi-view stereo system can therefore significantly reduce the amount of transmitted data and realize real-time transmission within a limited bandwidth.
[0003] A depth video is composed of a frame-by...
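The background above mentions Depth Image Based Rendering (DIBR) without detail. The Python sketch below illustrates the general DIBR idea of warping a reference color view to a nearby virtual viewpoint using its depth map. It assumes rectified parallel cameras and the common 8-bit inverse-depth quantization; the focal length, baseline, and Znear/Zfar values are illustrative and are not taken from the patent.

import numpy as np

def dibr_warp(color, depth8, focal=1000.0, baseline=0.05, z_near=40.0, z_far=120.0):
    """Forward-warp a reference color view to a neighbouring virtual viewpoint
    using its 8-bit depth map (rectified parallel cameras assumed)."""
    h, w = depth8.shape
    # Recover metric depth Z from 8-bit values (common inverse-depth quantization).
    z = 1.0 / (depth8 / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    disparity = np.round(focal * baseline / z).astype(int)   # horizontal pixel shift
    virtual = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            xv = x + disparity[y, x]
            if 0 <= xv < w:
                virtual[y, xv] = color[y, x]   # unfilled pixels remain as holes
    return virtual

# Example on synthetic data; real systems also resolve overlaps by depth
# ordering and inpaint the disocclusion holes.
rng = np.random.default_rng(0)
color = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
depth8 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
print(dibr_warp(color, depth8).shape)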


Application Information

IPC(8): H04N19/597, H04N19/147, H04N19/567
Inventors: An Ping (安平), Liu Chao (刘超), Zuo Yifan (左一帆), Zhao Bing (赵冰), Yan Jichen (闫吉辰), Zhang Zhaoyang (张兆扬)
Owner SHANGHAI UNIV