
Inter-frame prediction method and device, video encoder and video decoder

An inter-frame prediction technology based on preset positions, applied in the field of video coding and decoding. It solves the problem of high comparison complexity when constructing a motion information candidate list, reduces the number of comparison operations, improves the efficiency of inter-frame prediction, and improves coding and decoding performance.

Pending Publication Date: 2020-03-06
HUAWEI TECH CO LTD

AI Technical Summary

Problems solved by technology

[0006] In the above-mentioned process of updating the motion information candidate list, it is necessary to judge whether two pieces of motion information are the same. The traditional solution generally determines whether parameters such as the prediction direction, the reference frame, and the horizontal and vertical components of the motion vector are identical for both pieces of motion information. Determining whether two pieces of motion information are the same therefore requires multiple comparison operations, resulting in high complexity.
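The comparison described above can be sketched as follows. This is an illustrative model only: the field names (`pred_direction`, `ref_frame`, `mv_x`, `mv_y`) and the list-pruning loop are assumptions for the sketch, not the patent's actual data structures.

```python
from dataclasses import dataclass

@dataclass
class MotionInfo:
    pred_direction: int  # e.g. forward / backward / bi-directional
    ref_frame: int       # reference frame index
    mv_x: int            # horizontal motion-vector component
    mv_y: int            # vertical motion-vector component

def same_motion_info(a: MotionInfo, b: MotionInfo) -> bool:
    """Traditional duplicate check: several separate comparisons per pair."""
    return (a.pred_direction == b.pred_direction
            and a.ref_frame == b.ref_frame
            and a.mv_x == b.mv_x
            and a.mv_y == b.mv_y)

def add_candidate(candidates: list, new: MotionInfo) -> None:
    # Pruning this way costs up to 4 * len(candidates) comparisons
    # per inserted candidate, which is the complexity the patent targets.
    if not any(same_motion_info(c, new) for c in candidates):
        candidates.append(new)
```

The per-candidate cost grows with both the list length and the number of fields compared, which is the "high complexity" the background section refers to.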



Examples


Example 1

[0211] Example 1: Construction method 1 of the motion information candidate list in the affine mode.

[0212] In Example 1, every time an affine coding block that meets the requirements is found, the candidate motion information of the current image block will be determined according to the motion information of the control points of the affine coding block, and the obtained candidate motion information will be added to the motion information candidate list.

[0213] In Example 1, the motion information of the candidate control points of the current image block is derived mainly by using the inherited control point motion vector prediction method, and the derived motion information of the candidate control points is added to the motion information candidate list. The specific process of Example 1 is shown in Figure 8, which includes steps 401 to 405; these steps are described in detail below.

[0214] 401. Ac...
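The per-block flow of Example 1 might be sketched as follows. This is a hypothetical illustration: the dictionary keys, the `max_len` limit, and the placeholder "derivation" (inheriting the control-point MVs directly) are assumptions; the patent's actual affine derivation formulas are not reproduced here.

```python
def build_candidate_list_v1(neighbour_blocks, max_len=5):
    """Example-1 style: each time a qualifying affine neighbour is found,
    a candidate is derived from its control points and appended at once."""
    candidates = []
    seen_blocks = []  # coding blocks already used, compared by identity
    for block in neighbour_blocks:
        if len(candidates) >= max_len:
            break
        if not block.get("is_affine"):
            continue  # only affine coding blocks qualify in this mode
        if any(block is b for b in seen_blocks):
            continue  # the same coding block covers several adjacent positions
        seen_blocks.append(block)
        # Placeholder derivation: inherit the control-point MVs directly.
        candidates.append(block["control_point_mvs"])
    return candidates
```

Note that duplicates are filtered by checking block identity, not by comparing the derived motion information field by field.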

Example 2

[0234] Example 2: Construction method 2 of the motion information candidate list in the affine mode.

[0235] Different from Example 1, in Example 2 the coding block candidate list is constructed first, and the motion information candidate list is then obtained from the coding block candidate list.

[0236] It should be understood that in Example 1, every time an affine coding block at an adjacent position is determined, the motion information of the candidate control points of the current image block is derived from the motion information of that block's control points. In Example 2, all the affine coding blocks are determined first, and the motion information of the candidate control points of the current image block is then derived from all of them, so the candidate information in the motion information candidate list is generated in one pass. Compared with the ...
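The two-phase structure of Example 2 can be sketched as follows, under the same illustrative assumptions as the Example 1 sketch (dictionary keys and the direct-inheritance placeholder are not the patent's exact formulas):

```python
def build_candidate_list_v2(neighbour_blocks, max_blocks=5):
    """Example-2 style: collect distinct affine blocks first,
    then derive all candidate motion information in one pass."""
    # Phase 1: build the coding block candidate list (block-level dedup).
    block_list = []
    for block in neighbour_blocks:
        if block.get("is_affine") and not any(block is b for b in block_list):
            block_list.append(block)
        if len(block_list) >= max_blocks:
            break
    # Phase 2: derive the motion information candidate list from the blocks.
    return [b["control_point_mvs"] for b in block_list]
```

Separating the two phases means the (cheap) identity-based deduplication happens entirely before any motion information is derived.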

Example 3

[0258] Example 3: Construction method 1 of the motion information candidate list in translation mode.

[0259] The specific process of Example 3 is shown in Figure 10, which includes steps 601 to 605; these steps are described below.

[0260] 601. Acquire a motion information candidate list of a current image block.

[0261] The specific process of step 601 is the same as the specific process of step 401 in the first example, and will not be described again here.

[0262] 602. Traverse the adjacent positions of the current image block, and acquire the coding block where the current adjacent position is located.

[0263] Different from step 402 in Example 1, a block found by traversing adjacent positions here may be an ordinary translation block. The traversed coding blocks may also be affine coding blocks; this is not limited here.

[0264] 603. Determine whether the motion information candidate list is em...
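Steps 601 to 603 of Example 3 might be sketched as follows. The `get_block` accessor, the `"mv"` key, and the `max_len` limit are assumptions for illustration, and step 603 onward is truncated in the source, so the emptiness check here is only a plausible reading:

```python
def build_translation_candidates(adjacent_positions, get_block, max_len=5):
    """Translation-mode sketch following steps 601-603."""
    # 601: start from an (initially empty) motion information candidate list.
    candidates = []
    target_blocks = []
    # 602: traverse adjacent positions and fetch the coding block at each one.
    for pos in adjacent_positions:
        block = get_block(pos)
        if block is None:
            continue  # position unavailable (e.g. outside the picture)
        # Blocks here may be ordinary translation blocks or affine blocks.
        if any(block is b for b in target_blocks):
            continue  # the same coding block covers several adjacent positions
        target_blocks.append(block)
        # 603 (truncated in the source): when the list is empty, or the block
        # is new, the candidate is added without field-by-field comparison.
        candidates.append(block["mv"])
        if len(candidates) >= max_len:
            break
    return candidates
```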


Abstract

The invention provides an inter-frame prediction method and device, a video encoder, and a video decoder. The method comprises the steps of: determining N target image blocks from the M image blocks where M adjacent positions of a current image block are respectively located, wherein any two target image blocks among the N target image blocks are different, M and N are both positive integers, and M is larger than or equal to N; determining candidate motion information of the current image block according to the motion information of the N target image blocks, and adding the candidate motion information of the current image block into a motion information candidate list of the current image block; and performing inter-frame prediction on the current image block according to the motion information candidate list. The invention reduces the comparison operations required to obtain the motion information candidate list and improves inter-frame prediction efficiency.
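The core selection step of the abstract (choosing N pairwise-distinct target blocks from the M blocks covering M adjacent positions) might be sketched as follows. The `get_block` accessor and the identity-based notion of "different" blocks are assumptions for the sketch; the patent's precise criterion is defined in its claims.

```python
def select_target_blocks(adjacent_positions, get_block):
    """From the M blocks covering M adjacent positions, keep N distinct ones.
    Deduplicating by block identity avoids the per-field motion-information
    comparisons that traditional candidate-list pruning requires."""
    targets = []
    for pos in adjacent_positions:
        block = get_block(pos)
        if block is not None and not any(block is t for t in targets):
            targets.append(block)
    return targets  # N <= M, all pairwise distinct
```

Because two adjacent positions often fall inside the same coding block, comparing block identity once is cheaper than comparing prediction direction, reference frame, and both motion-vector components for every candidate pair.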

Description

Technical field

[0001] The present application relates to the technical field of video coding and decoding, and more specifically, to an inter-frame prediction method and device, as well as a video encoder and a video decoder.

Background

[0002] Digital video capabilities can be incorporated into a wide variety of devices, including digital televisions, digital broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, electronic book readers, digital cameras, digital recording devices, digital media players, video game devices, video game consoles, cellular or satellite radiotelephones (so-called "smart phones"), video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video compression techniques such as those defined in MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10 Advanced Video Coding, and the video coding standard H.265/High Effi...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N19/503; H04N19/147; H04N19/91; H04N19/176; H04N19/44
CPC: H04N19/503; H04N19/147; H04N19/91; H04N19/176; H04N19/44; H04N19/52; H04N19/54; H04N19/105; H04N19/56; H04N19/137; H04N19/159
Inventor: 符婷, 陈焕浜, 杨海涛
Owner: HUAWEI TECH CO LTD