
An Adaptive Removal Method of Video Compression Artifacts Based on Deep Learning

A video compression and deep learning technology, applied in the field of video processing, which addresses the problems of increased coding complexity, lack of adaptive ability, and weak robustness, with the effects of enhancing nonlinear expression ability, alleviating the vanishing-gradient problem, and strengthening feature propagation and reuse.

Active Publication Date: 2020-11-17
福建帝视科技集团有限公司

AI Technical Summary

Problems solved by technology

While alleviating video compression artifacts, these two built-in filters also increase the complexity of encoding and affect the real-time performance of the encoding algorithm.
[0005] In general, these traditional video compression artifact removal methods have the following problems. First, the filters must be designed manually; such filters usually target only one type of artifact and generalize poorly. Second, the filter thresholds must be set empirically; the threshold setting usually has a large impact on the filtering result, so robustness is weak. Third, using embedded (in-loop) filters to alleviate compression artifacts increases coding complexity and affects the real-time performance of the coding algorithm. Fourth, traditional algorithms seldom exploit the useful side information generated during encoding, making it difficult to adjust filter strength automatically, so their adaptive ability is weak.
However, current out-of-loop filtering methods based on deep learning still lack adaptivity: a single convolutional neural network model cannot handle video artifacts of varying intensities well. Internet video transcoding usually adopts constant bit rate (CBR) control, which produces compression artifacts of different strengths within the same video sequence.
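The adaptivity called for above can be illustrated with a minimal sketch (all names, thresholds, and transforms here are hypothetical stand-ins, not taken from the patent): a quality predictor scores each decoded frame, and the score selects a restoration model trained for that artifact strength.

```python
# Minimal sketch of adaptive artifact removal (hypothetical names/thresholds):
# a quality predictor scores each frame, and the score picks which of several
# restoration models (each trained for one artifact strength) to apply.

def predict_quality(frame):
    """Stand-in for the patent's image quality prediction model.
    Here we fake a score in [0, 1] from the frame's mean pixel value."""
    return sum(frame) / (len(frame) * 255.0)

def restore(frame, strength):
    """Stand-in for a restoration CNN of a given strength (placeholder math)."""
    return [min(255, p + strength) for p in frame]

# One model per artifact-intensity band (thresholds are illustrative).
MODELS = [(0.33, lambda f: restore(f, 30)),   # heavy artifacts -> strong model
          (0.66, lambda f: restore(f, 15)),   # medium artifacts
          (1.01, lambda f: restore(f, 5))]    # light artifacts -> mild model

def remove_artifacts(frame):
    score = predict_quality(frame)
    for threshold, model in MODELS:
        if score < threshold:
            return model(frame)

heavily_compressed = [20] * 16   # low predicted quality -> strong correction
lightly_compressed = [220] * 16  # high predicted quality -> mild correction
print(remove_artifacts(heavily_compressed)[0])
print(remove_artifacts(lightly_compressed)[0])
```

In the actual invention both the predictor and the restorers are convolutional networks; this sketch shows only the dispatch logic that makes the method adaptive.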




Embodiment Construction

[0035] As shown in Figures 1-5, in order to enable those skilled in the art to better understand the technical solution of the present invention, the technical solutions in the embodiments of the present application will be described below more completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. On the basis of the embodiments described in this application, all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of this application.

[0036] As can be seen from Figure 1, the present invention comprises two implementation stages, namely an image quality prediction stage and an artifact removal stage. The invention discloses a method for adaptively removing video compression artifacts based on deep learning, which includes the fo...



Abstract

The invention discloses an adaptive video compression artifact removal method based on deep learning, which adopts a densely connected deep convolutional network to automatically extract the compression characteristics of a video frame, effectively avoiding the shortcomings of manually designed filters in traditional methods. The invention acts on the post-processing stage of the video and does not affect the processing flow or the real-time performance of existing video encoding and decoding algorithms. A new image quality prediction model is proposed to realize automatic selection among compression artifacts of different intensities, giving the method strong adaptive ability. A densely connected convolutional network is used to remove video compression artifacts, which effectively alleviates the vanishing-gradient problem, deepens the network structure, and enhances the network's nonlinear expression ability. At the same time, the network makes full use of intermediate-layer features, which not only enhances feature propagation and reuse but also greatly reduces the number of network parameters.
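The abstract's claims about dense connections (feature reuse, short gradient paths, fewer parameters) follow from DenseNet-style wiring, in which each layer receives the concatenation of all earlier feature maps and contributes only a small "growth rate" of new channels. A small channel-counting sketch of that wiring (the layer widths are illustrative, not the patent's):

```python
# Sketch of dense connectivity: each layer consumes the concatenation of ALL
# earlier feature maps, so features are reused and every layer only needs to
# add a small "growth rate" k of new channels (keeping parameter counts low).

def dense_block(x_channels, num_layers, growth_rate):
    """Track channel counts through a dense block (counts only, no tensor math)."""
    seen_by_layer, total = [], x_channels
    for _ in range(num_layers):
        seen_by_layer.append(total)   # this layer sees all prior channels
        total += growth_rate          # and contributes k new ones
    return seen_by_layer, total

seen, out_channels = dense_block(x_channels=16, num_layers=4, growth_rate=12)
print(seen)          # [16, 28, 40, 52]
print(out_channels)  # 64 = 16 + 4 * 12
```

Because every layer's input reuses earlier features, gradients have short paths back to the input (which is what alleviates vanishing gradients), and each layer can stay narrow, which is the parameter saving the abstract mentions.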

Description

Technical Field

[0001] The invention relates to the field of video processing and deep learning technology, in particular to a method for adaptively removing video compression artifacts based on deep learning.

Background Technique

[0002] Video compression artifact removal is a technique used to improve video quality. The compression artifacts of the video are generated by the encoding method of the video.

[0003] With the rapid growth of Internet video data, higher compression rates are usually used in video encoding in order to control the cost of video storage and transmission. Generally speaking, lossy compression algorithms are used in the video encoding process, such as the common MPEG and H.26X series, among which H.264 is the most widely used video encoding method. While reducing the video size, these encoding methods introduce compression artifacts such as block effects, ringing effects, flickering effects, and mosquito no...
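How lossy coding creates such artifacts can be seen in miniature: each block is quantized independently, and the per-block rounding error shows up as visible discontinuities at block borders. A toy illustration (real codecs like H.264 quantize transform coefficients rather than raw pixels, so this is a simplification):

```python
# Toy illustration of why lossy compression causes blocking: quantizing each
# block independently introduces per-block rounding error, so reconstructed
# values jump at block boundaries. (Real codecs quantize DCT coefficients.)

def quantize_block(block, step):
    """Quantize then dequantize a block, as a lossy codec does."""
    return [round(v / step) * step for v in block]

# A smooth 16-pixel ramp split into two "blocks", coarsely quantized.
ramp = list(range(100, 116))              # 100, 101, ..., 115
left, right = ramp[:8], ramp[8:]
rec = quantize_block(left, 12) + quantize_block(right, 12)
print(rec)

# The smooth ramp becomes stair steps; the jumps between steps are the
# blocking artifact a deblocking filter (or the patent's CNN) tries to remove.
error = max(abs(a - b) for a, b in zip(ramp, rec))
print(error)
```

Larger quantization steps (higher compression) make the steps coarser, which is why artifact strength varies with bit rate and motivates the adaptive removal described above.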

Claims


Application Information

Patent Timeline
no application
Patent Type & Authority: Patent (China)
IPC(8): H04N19/117; H04N19/86
CPC: H04N19/117; H04N19/86
Inventor: 苏建楠林宇辉黄伟萍李根童同高钦泉
Owner: 福建帝视科技集团有限公司