A virtual reference frame generation method based on parallax-guided fusion

A virtual-reference-frame generation technology, applied in digital video signal modification, image communication, electrical components, etc. It addresses problems such as an insufficiently accurate predictive coding process, coding performance that needs further improvement, and the lack of a deep learning algorithm that exploits the disparity relationship; the method benefits storage and transmission, reduces encoding bits, and reduces compression distortion.

Active Publication Date: 2022-01-04
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

[0006] Existing multi-view video coding methods do not fully exploit the disparity relationship between adjacent viewpoints, which makes the predictive coding process inaccurate. Existing algorithms that manually correct inter-view reference pictures have poor applicability and easily introduce artifacts, so their performance needs further improvement. At present, few algorithms use deep learning to improve the efficiency of multi-view video coding, and there is no deep learning algorithm that generates virtual reference frames from the disparity relationship.


Examples


Embodiment 1

[0041] The present invention proposes a virtual reference frame generation method based on parallax-guided fusion. It builds a parallax-guided generation network that exploits the learning capability of convolutional neural networks to learn and transform the disparity relationship between adjacent viewpoints and to generate a high-quality virtual reference frame. This frame provides a high-quality reference for the current coded frame, improving prediction accuracy and thus coding efficiency. The whole process is divided into four steps: 1) multi-level receptive field expansion; 2) parallax attention fusion; 3) integration into the parallax-guided generation network framework; 4) embedding into the coding framework. The specific implementation steps are as follows:
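The four steps above can be pictured as a simple data flow. The following is a minimal structural sketch in NumPy, not the patented network: every function body is a labeled placeholder (identity-like or averaging operations) chosen only to make the pipeline runnable, and all names are hypothetical.

```python
import numpy as np

def expand_receptive_field(frame):
    # Step 1 placeholder: the real module uses stacked convolutions to
    # extract multi-scale deep features; here we only add a channel axis
    # so the data flow between steps is visible.
    return frame[None, :, :].astype(np.float64)

def parallax_attention_fuse(feat_cur, feat_adj):
    # Step 2 placeholder: the real module fuses the two views via a
    # disparity (parallax) attention mechanism; here a plain average.
    return 0.5 * (feat_cur + feat_adj)

def generate_virtual_reference(frame_cur_view, frame_adj_view):
    # Steps 3-4: the modules are chained into one generation network whose
    # output is then inserted into the encoder's reference picture list.
    f_cur = expand_receptive_field(frame_cur_view)
    f_adj = expand_receptive_field(frame_adj_view)
    fused = parallax_attention_fuse(f_cur, f_adj)
    return fused[0]  # virtual reference frame, same size as the input

vrf = generate_virtual_reference(np.zeros((4, 8)), np.ones((4, 8)))
print(vrf.shape)  # (4, 8)
```

The key structural point the sketch preserves is that the network consumes reconstructed frames from two viewpoints and emits a single frame of the same resolution, which the encoder can treat like any other reference picture.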

[0042] 1. Multi-level receptive field expansion

[0043] Feature representation with rich information is crucial for image reconstruction tasks. Considering that both multi-scale feature learning and multi-level receptive field...
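The motivation for expanding receptive fields at multiple levels can be quantified with the standard receptive-field formula: for a stack of stride-1 convolutions, the effective receptive field is 1 plus the sum over layers of (kernel size − 1) × dilation. A short illustration (generic formula, not code from the patent):

```python
def receptive_field(kernels, dilations):
    """Effective receptive field of stacked stride-1 convolutions:
    RF = 1 + sum over layers of (kernel_size - 1) * dilation."""
    return 1 + sum((k - 1) * d for k, d in zip(kernels, dilations))

# Three 3x3 layers with dilations 1, 2, 4 see a 15-pixel context,
# versus only 7 pixels for the same three layers without dilation.
print(receptive_field([3, 3, 3], [1, 2, 4]))  # 15
print(receptive_field([3, 3, 3], [1, 1, 1]))  # 7
```

This is why multi-level expansion matters for disparity handling: large disparities between adjacent viewpoints can only be captured when the receptive field spans the corresponding horizontal shift.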

Embodiment 2

[0064] The method provided by this embodiment of the present invention is validated below with specific experiments; see the following description for details:

[0065] The present invention ports the method of Zhao et al., which uses a separable convolutional network to generate a virtual reference frame for 2D video coding, to the HTM16.2 platform and compares it with the proposed method. Relative to the 3D-HEVC baseline, the present invention achieves an average bit saving of 5.31%, while Zhao's method achieves 3.77%. Compared with Zhao's method, the present invention thus achieves an additional 1.54% bit saving, indicating that the proposed method is well suited to multi-view video coding. To verify the effectiveness of the proposed method more intuitively, Figure 2 shows the visualization results of different methods to genera...



Abstract

The invention discloses a method for generating a virtual reference frame based on parallax-guided fusion. The method includes the following steps: constructing a multi-level receptive field expansion module to extract multi-scale deep features; constructing a parallax attention fusion module to fuse the disparity relationship across the multi-scale features; integrating the multi-level receptive field expansion module and the parallax attention fusion module into a network framework to build a parallax-guided generation network; and embedding the parallax-guided generation network into the coding framework to generate a virtual reference frame. The present invention provides a high-quality reference for the current coding frame, thereby improving prediction accuracy and further improving coding efficiency.
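The parallax attention fusion step can be illustrated with a minimal NumPy sketch. This is a simplified assumption, not the patented network: since rectified adjacent views differ mainly by horizontal disparity, attention is restricted to each epipolar line (image row), letting every position in the current view attend over all horizontal positions of the adjacent view.

```python
import numpy as np

def parallax_attention(feat_cur, feat_adj):
    """Row-wise (epipolar-line) attention: each position in the current
    view attends over all horizontal positions of the adjacent view, so
    the fusion can 'look along' the disparity axis.
    Both inputs have shape (C, H, W)."""
    c, h, w = feat_cur.shape
    fused = np.empty_like(feat_cur, dtype=np.float64)
    for i in range(h):
        q = feat_cur[:, i, :].T          # (W, C) queries, current view
        k = feat_adj[:, i, :].T          # (W, C) keys, adjacent view
        scores = q @ k.T / np.sqrt(c)    # (W, W) similarity along the row
        scores -= scores.max(axis=1, keepdims=True)  # numerical stability
        att = np.exp(scores)
        att /= att.sum(axis=1, keepdims=True)        # softmax over width
        fused[:, i, :] = (att @ k).T     # disparity-weighted adjacent feats
    return fused

rng = np.random.default_rng(0)
a = rng.normal(size=(2, 3, 5))
b = rng.normal(size=(2, 3, 5))
out = parallax_attention(a, b)
print(out.shape)  # (2, 3, 5)
```

The design point is that the (W × W) attention map plays the role of a soft, learned disparity correspondence: no explicit disparity estimate is needed, because the softmax weights select which horizontal positions of the adjacent view contribute to each output position.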

Description

Technical field

[0001] The invention relates to the field of virtual reference frame generation, and in particular to a method for generating a virtual reference frame based on parallax-guided fusion.

Background technique

[0002] Multi-view video is a typical representation of 3D video, which provides users with an immersive experience by recording information from multiple viewpoints of the same scene. However, the amount of multi-view video data is much larger than that of traditional color video, which poses severe challenges for the storage and transmission of multi-view video. For this reason, the 3D-HEVC coding standard introduces many coding tools suited to multi-view video and depth video.

[0003] For multi-view video coding, in addition to traditional temporal inter-frame prediction, 3D-HEVC also adopts inter-view prediction. During predictive coding, reconstructed frames from neighboring views are added to the reference pictur...
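The reference-list mechanism described above can be sketched as a toy illustration (hypothetical names, not HTM/3D-HEVC code): the encoder's candidate references for the current frame combine temporal reconstructions, an inter-view reconstruction, and, in the proposed method, the generated virtual reference frame.

```python
# Toy illustration of assembling a reference picture list for the
# current frame. All names are hypothetical placeholders.
def build_reference_list(temporal_recons, interview_recon, virtual_ref=None):
    refs = list(temporal_recons)      # classic temporal references
    refs.append(interview_recon)      # inter-view reference (3D-HEVC)
    if virtual_ref is not None:
        refs.append(virtual_ref)      # generated virtual reference frame
    return refs

refs = build_reference_list(["t-1", "t-2"], "adj-view-t", "vrf-t")
print(refs)  # ['t-1', 't-2', 'adj-view-t', 'vrf-t']
```

The virtual reference frame is simply one more candidate in this list, so the standard rate-distortion search decides per block whether it is a better predictor than the temporal or raw inter-view references.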


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04N19/593; H04N19/136; H04N19/105; H04N19/184
CPC: H04N19/593; H04N19/136; H04N19/105; H04N19/184
Inventor: 雷建军, 张宗千, 郑泽勋, 石雅南
Owner TIANJIN UNIV