
Video compression through motion warping using learning-based motion segmentation

A motion and video-sequence technology, applied in the field of video data encoding and decoding

Pending Publication Date: 2020-09-11
GOOGLE LLC

AI Technical Summary

Problems solved by technology

Digital video streams can contain large amounts of data and consume substantial computing or communication resources of computing devices used for processing, transmission, or storage of video data



Embodiment Construction

[0021] A video stream can be compressed by various techniques to reduce the bandwidth required to transmit or store it. Encoding a video stream into a bitstream involves compression; the bitstream can then be sent to a decoder, which decodes or decompresses it to prepare the video stream for viewing or further processing. Compression of video streams often exploits the spatial and temporal correlation of video signals through spatial and/or motion-compensated prediction.

[0022] In spatial prediction, a predicted block similar to the current block to be encoded can be generated from the values of pixels surrounding (e.g., previously encoded and decoded) the current block. These values can be used directly for padding, or can be combined in various ways to fill in the pixel positions of the predicted block according to the prediction mode (also called the intra prediction mode). By encoding the intra prediction mode and the difference bet...
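The DC mode is a simple instance of the intra prediction described above: the predicted block is filled with the mean of the reconstructed neighbouring pixels. The sketch below is illustrative only (real codecs also define many directional modes), and the neighbour values are hypothetical.

```python
import numpy as np

def dc_predict(top, left):
    """DC intra prediction: fill an n-by-n predicted block with the mean of
    previously reconstructed pixels above and to the left of the current block."""
    n = len(top)
    dc = int(round((top.sum() + left.sum()) / (len(top) + len(left))))
    return np.full((n, n), dc, dtype=top.dtype)

# Hypothetical reconstructed neighbour pixels for a 4x4 block
top = np.array([100, 102, 101, 99], dtype=np.int32)   # row above the block
left = np.array([98, 100, 102, 100], dtype=np.int32)  # column left of the block
pred = dc_predict(top, left)  # every entry is the rounded mean, 100
```

The encoder would then transmit the mode ("DC") plus the residual between the actual block and `pred`, rather than the raw pixels.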


Abstract

Regions for texture-based coding are identified using a spatial segmentation and a motion-flow segmentation. For frames of a group of frames in a video sequence, an image in a frame is segmented, using a first classifier, into a texture region, a non-texture region, or both. The texture regions of the group of frames are then segmented, using a second classifier, into a texture coding region or a non-texture coding region; this second classifier uses motion across the group of frames as input. Each of the classifiers is generated using a machine-learning process. Blocks of the non-texture region and the non-texture coding region of the current frame are coded using a block-based coding technique, while blocks of the texture coding region are coded using a coding technique other than the block-based coding technique.
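The two-stage decision in the abstract can be sketched as below. The classifier functions are stand-ins: the patent generates both with a machine-learning process that is not reproduced here, so the variance-based spatial test and the motion-consistency test are purely hypothetical heuristics chosen to make the control flow concrete.

```python
import numpy as np

def spatial_classifier(block):
    # Hypothetical stand-in for the first (spatial) classifier: a block with a
    # low variance of gradient is treated as homogeneous texture.
    gy, gx = np.gradient(block.astype(float))
    return "texture" if (gx.var() + gy.var()) < 5.0 else "non-texture"

def motion_classifier(motion_mags):
    # Hypothetical stand-in for the second classifier: motion magnitudes that
    # are consistent across the group of frames suggest the region can be
    # coded by warping rather than block by block.
    return "texture-coding" if np.std(motion_mags) < 0.5 else "non-texture-coding"

def classify_region(block, motion_mags):
    """Route a region to block-based coding or to the texture-coding path."""
    if spatial_classifier(block) != "texture":
        return "block-based"        # non-texture regions use block-based coding
    if motion_classifier(motion_mags) == "texture-coding":
        return "texture-coded"      # coded by a non-block-based technique
    return "block-based"            # texture, but motion is too inconsistent
```

Only regions that pass both classifiers take the non-block-based path; everything else falls back to conventional block-based coding, matching the abstract's routing of blocks.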

Description

Background technique

[0001] A digital video stream can represent video using a sequence of frames or still images. Digital video can be used for a variety of applications including, for example, video conferencing, high-definition video entertainment, video advertising, or the sharing of user-generated video. Digital video streams can contain large amounts of data and consume substantial computing or communication resources of computing devices for processing, transmission, or storage of the video data. Various methods including compression and other encoding techniques have been proposed for reducing the amount of data in a video stream.

[0002] One technique for compression uses a reference frame to generate a predicted block corresponding to the current block to be encoded. The difference between the predicted block and the current block may be encoded, instead of encoding the value of the current block itself, to reduce the amount of encoded data. Contents of the inve...
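The residual coding in paragraph [0002] can be shown in a minimal sketch: instead of coding the current block's pixel values, the encoder codes the (typically much smaller) difference from a predicted block, and the decoder reverses the subtraction exactly.

```python
import numpy as np

def encode_residual(current, predicted):
    # Widened to int32 so the subtraction cannot wrap around uint8.
    return current.astype(np.int32) - predicted.astype(np.int32)

def decode_block(residual, predicted):
    # Adding the residual back to the prediction reconstructs the block exactly.
    return (predicted.astype(np.int32) + residual).astype(np.uint8)

current = np.array([[52, 54], [53, 55]], dtype=np.uint8)     # block to encode
predicted = np.array([[50, 50], [50, 50]], dtype=np.uint8)   # prediction
residual = encode_residual(current, predicted)  # small values, cheap to entropy-code
restored = decode_block(residual, predicted)    # bit-exact reconstruction
```

In a real codec the residual would additionally be transformed, quantized, and entropy-coded; this sketch shows only the prediction/difference step the paragraph describes.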

Claims


Application Information

IPC (8): G06T7/11; G06T7/194; H04N19/103; H04N19/167; H04N19/17; H04N19/503; H04N19/593; G06K9/62; G06V10/764
CPC: H04N19/103; H04N19/167; H04N19/17; H04N19/503; H04N19/543; H04N19/593; G06T7/11; G06T7/194; G06T7/238; G06T7/40; G06T2207/10016; G06T2207/10024; G06T2207/20084; G06V10/454; G06V10/82; G06V10/764; H04N19/172; H04N19/176; H04N19/527; H04N19/61; G06N3/08; G06F18/24143
Inventors: Yuxin Liu, Adrian Grange
Owner GOOGLE LLC