
Object and fractal-based multi-ocular three-dimensional video compression encoding and decoding method

A stereoscopic video compression coding technology, applied in stereoscopic systems, digital video signal modification, television, and related fields. It addresses the problems of slow coding speed and heavy computation in existing methods, which make it difficult to meet compression-time and image-quality requirements.

Inactive Publication Date: 2011-02-23
BEIHANG UNIV

Problems solved by technology

However, in the cyclic prediction mapping (CPM) method, ensuring that the start frame approximately converges to the original image after its own cyclic decoding requires complex transformations, searches, and iterations during compression, making it difficult to meet the requirements for compression time and image quality.
At present, typical fractal image and video compression methods involve a large amount of computation, encode slowly, and yield decoded quality that still needs improvement, so fractal image and video compression methods require further refinement.




Embodiment Construction

[0075] The method of the present invention is described in further detail below in conjunction with the accompanying drawings. Only the luminance component Y is taken as an example; the compression steps for the chrominance components U and V are the same as for the luminance component.

[0076] The invention proposes a method for compressing and decompressing multi-ocular stereoscopic video based on objects and fractals. In multi-ocular stereoscopic video coding, the middle ocular video is selected as the reference and compressed using the motion compensation prediction (MCP) principle, while the other ocular videos are compressed using the DCP+MCP principle. Taking a trinocular video as an example, the middle ocular video is used as the reference and encoded with MCP alone. First, a video segmentation method is used to obtain the video object segmentation plane, namely the Alpha plane, and block DCT transform coding is applied to the start frame. ...
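To make the prediction structure concrete, here is a minimal Python sketch, under stated assumptions rather than the patented implementation, of how candidate reference frames could be assigned per view in a trinocular sequence: the middle ocular video is predicted only from its own previous frame (MCP), while the left and right ocular videos may be predicted both from their own previous frame (MCP) and from a middle-view frame (DCP). The function name, view labels, and the choice of the same-time-instant middle-view frame for DCP are illustrative assumptions.

```python
# Illustrative sketch of the MCP / DCP+MCP reference assignment for a
# trinocular sequence. View labels "middle", "left", "right" are assumed.

def candidate_references(view, frame_index):
    """Return (view, frame_index) pairs a frame in `view` may be predicted from.

    Assumptions (not stated verbatim in the patent text):
      - frame 0 is the start frame, intra-coded with block DCT, so no prediction;
      - MCP uses the previous frame of the same view;
      - DCP uses the middle view's frame at the same time instant.
    """
    if frame_index == 0:
        return []                                # start frame: block DCT intra coding only
    refs = [(view, frame_index - 1)]             # MCP: previous frame, same view
    if view in ("left", "right"):
        refs.append(("middle", frame_index))     # DCP: middle (reference) view
    return refs


if __name__ == "__main__":
    print(candidate_references("middle", 3))     # [('middle', 2)]
    print(candidate_references("left", 3))       # [('left', 2), ('middle', 3)]
```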



Abstract

The invention discloses an object- and fractal-based multi-ocular three-dimensional video compression and decompression method. In multi-ocular three-dimensional video encoding, the middle ocular video is selected as the reference ocular video and compressed using the motion compensation prediction (MCP) principle, while the other ocular videos are compressed using a disparity compensation prediction (DCP) plus MCP principle. Taking a trinocular video as an example, the middle ocular video is selected as the reference and encoded using MCP alone. First, a video object segmentation plane, namely the Alpha plane, is obtained by a video segmentation method; the start frame is encoded with the discrete cosine transform (DCT), and non-I frames are encoded with block motion estimation / compensation. Then the region attribute of each image block is judged using the Alpha plane: if the block lies entirely outside the currently encoded video object region, the external block is not processed; if the block lies entirely inside the region, the most similar matching block is searched for in the reference frame, namely the previous frame of the middle ocular video, by a full search method; and if some pixels of the block are inside the region while others are not, the boundary block is processed separately. Finally, the coefficients of the iterated function system are compressed using Huffman coding. The left and right ocular videos are each encoded in the MCP+DCP mode; during DCP encoding, the epipolar constraint and search direction of the parallel stereo camera configuration are fully exploited.
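As a reading aid, the following minimal sketch (illustrative names and a 16x16 block size assumed, not the patented implementation) shows the Alpha-plane block classification described in the abstract: a block entirely outside the current video object region is skipped, a block entirely inside is matched by full search in the reference frame, and a block that straddles the object border is flagged for separate boundary handling.

```python
import numpy as np

def classify_block(alpha_plane, y, x, block_size=16, object_label=1):
    """Classify one image block against the Alpha (object segmentation) plane.

    Returns:
      'external' -> block lies entirely outside the current object: not processed;
      'internal' -> block lies entirely inside: full-search the most similar
                    matching block in the reference frame;
      'boundary' -> block straddles the object border: handled separately.
    The block size and object_label convention are assumptions for this sketch.
    """
    block = alpha_plane[y:y + block_size, x:x + block_size]
    inside = (block == object_label)
    if not inside.any():
        return "external"
    if inside.all():
        return "internal"
    return "boundary"
```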

Description

Technical field:

[0001] The invention belongs to the field of video compression coding and relates to the compression coding of multi-ocular stereoscopic video, in particular to a video compression coding method based on objects and fractals. It lays a foundation for the real-time application of multi-ocular stereoscopic video coding, further improves the performance of fractal multi-ocular stereoscopic video compression coding, and makes it more practical and widely applicable.

Background technique:

[0002] The concept of Object-Based (OB) coding was first proposed by the MPEG-4 standard. An object-based video compression coding method allows the foreground objects and background objects of each video frame to be coded independently, which can further improve compression efficiency. At the same time, new functions can be realized at the decoding end, such as independent transmission and decoding of each video object, object and background replacement, and object-based video retrieval, especially compared t...


Application Information

IPC(8): H04N13/00, H04N7/50, H04N7/26, H04N19/176, H04N19/51, H04N19/527, H04N19/57, H04N19/597, H04N19/61, H04N19/625
Inventor: 祝世平, 侯仰拴, 陈菊嫱, 王再阔
Owner BEIHANG UNIV