Face feature point coding and decoding method, device and system

A method and technology for encoding and decoding face features, applied in the fields of face feature point coding and decoding methods, devices and systems. It addresses the problem that the model parameter vector changes only slowly in the time domain, so that redundancy remains between adjacent frames, and it achieves effects such as reducing the amount of data.

Inactive Publication Date: 2017-11-14
PALMWIN INFORMATION TECH SHANGHAI

AI Technical Summary

Problems solved by technology

[0004] However, the model parameter vector obtained with this compression method changes slowly in the time domain, and the face model parameters of the preceding and following frames in the video still contain a lot of r...



Examples


Embodiment 1

[0097] An embodiment of the present invention provides a method for encoding and decoding face feature points. Referring to Figure 1, the method includes:

[0098] 101. The sender acquires feature point information of the entire face in the current video frame.

[0099] Wherein, the feature point information is used to describe at least one of the outline, eyebrows, eyes, nose and mouth of the human face.

[0100] 102. Calculate the difference between the feature point information of one half of the face and the feature point information of the other half of the face, where the half face is either the left half or the right half of the face.

[0101] Specifically, for each feature point of the half face, the difference between its information and the information of the corresponding feature point in the other half of the face is calculated.
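The patent text does not give a concrete algorithm for this step, so the following is only a minimal sketch of one plausible implementation. It assumes the feature point information consists of 2D landmark coordinates, that a mirror-pairing table relating left-half and right-half landmarks is available (the MIRROR_PAIRS table below is hypothetical, not taken from the patent), and that the right-half points are reflected about an estimated vertical midline before the subtraction:

```python
import numpy as np

# Hypothetical pairing of left-half landmark indices to their right-half
# counterparts; a real landmark model would define this table.
MIRROR_PAIRS = {0: 16, 1: 15, 2: 14, 36: 45, 39: 42, 48: 54}

def half_face_differences(points):
    """Sketch of step 102: difference between each left-half feature point
    and the mirrored corresponding right-half feature point.

    points: (N, 2) array of (x, y) coordinates for the whole face.
    Returns (left, diffs) with diffs[i] = mirrored_right[i] - left[i].
    """
    mid_x = points[:, 0].mean()              # rough estimate of the face midline

    left_idx = list(MIRROR_PAIRS.keys())
    right_idx = [MIRROR_PAIRS[i] for i in left_idx]

    left = points[left_idx].astype(float)
    right = points[right_idx].astype(float)
    right[:, 0] = 2.0 * mid_x - right[:, 0]  # reflect the right half across the midline

    diffs = right - left                     # small when the face is roughly symmetric
    return left, diffs
```

Because a face is roughly left-right symmetric, these differences stay close to zero, so encoding one half plus the differences removes much of the spatial redundancy that encoding both halves independently would carry.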

[0102] 103. The sender encodes the feature point information of the half face and the differences.
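The embodiment does not specify how step 103 encodes this information; as a hedged sketch only, assume the half-face coordinates are quantized to 16-bit integers and the (small) differences to 8-bit integers, then packed into a byte stream, so the differences cost half as many bytes per point as raw coordinates would:

```python
import struct
import numpy as np

def encode_half_face(left, diffs, diff_scale=4):
    """Toy encoder for step 103 (illustrative only): pack the half-face
    landmarks as int16 pairs and the mirrored differences as int8 pairs.

    left:  (N, 2) half-face coordinates, assumed to fit in int16 pixel units.
    diffs: (N, 2) mirrored differences, assumed small; clipped to the int8 range.
    """
    payload = bytearray()
    payload += struct.pack("<H", len(left))                  # landmark count
    for x, y in np.round(np.asarray(left)).astype(int):
        payload += struct.pack("<hh", int(x), int(y))         # 4 bytes per point
    q = np.clip(np.round(np.asarray(diffs) * diff_scale), -128, 127).astype(int)
    for dx, dy in q:
        payload += struct.pack("<bb", int(dx), int(dy))       # 2 bytes per point
    return bytes(payload)
```

A real implementation would likely add an entropy coder on top, but even this naive packing already transmits only one half of the face at full precision.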

[0103] 104. The ...

Embodiment 2

[0110] An embodiment of the present invention provides a method for encoding and decoding face feature points. Referring to Figure 2, the method includes:

[0111] 201. The sender acquires feature point information of the entire face in the current video frame.

[0112] Wherein, the feature point information is used to describe at least one of the outline, eyebrows, eyes, nose and mouth of the human face.

[0113] The method is performed by an electronic device on the sender side.

[0114] The entire face may include the outline, eyebrows, eyes, nose and mouth of the whole face; or it may include the outline, eyes and mouth; or the outline, eyes, nose and mouth; or only the eyes and mouth; or the eyes, nose and mouth. The whole face may also be described in other ways, and this embodiment of the present invention does not limit the specific composition. The acquired real face of the user may also be a face in wh...
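As a concrete (and purely hypothetical) illustration of what "feature point information" for these region combinations could look like, the sketch below groups whole-face landmarks by region using the common 68-point layout; the region names are from the embodiment, but the index ranges are an assumption of this sketch, not something the patent specifies:

```python
# Assumed 68-point landmark layout grouped by facial region (hypothetical;
# the patent does not prescribe a particular landmark model).
FACE_REGIONS = {
    "outline":  range(0, 17),
    "eyebrows": range(17, 27),
    "nose":     range(27, 36),
    "eyes":     range(36, 48),
    "mouth":    range(48, 68),
}

def feature_point_info(points, regions=("outline", "eyebrows", "eyes", "nose", "mouth")):
    """Return the landmarks describing the selected facial regions, e.g.
    regions=("outline", "eyes", "mouth") for one of the combinations above.

    points: sequence of 68 (x, y) whole-face landmark coordinates.
    """
    return {name: [tuple(points[i]) for i in FACE_REGIONS[name]] for name in regions}
```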

Embodiment 3

[0144] An embodiment of the present invention provides an encoding method for face feature points. Referring to Figure 3, the method includes:

[0145] 301. Acquire feature point information of the entire human face in the current video frame.

[0146] Wherein, the feature point information is used to describe at least one of the outline, eyebrows, eyes, nose and mouth of the human face.

[0147] Specifically, this step is similar to step 201 in Embodiment 2 and will not be repeated here. The subject that obtains the feature point information of the entire face in the current video frame may be the sender's electronic device, the receiver's electronic device, or another electronic device, which is not limited in this embodiment of the present invention.

[0148] 302. Calculate the difference between the feature point information of one half of the face and the feature point information of the other half of the face.

...



Abstract

The present invention discloses a face feature point coding and decoding method, device, and face feature point coding and decoding system. The method includes the following steps: a sender acquires the feature point information of an entire face in a current video frame; the difference between the feature point information of one half of the face and the feature point information of the other half of the face is calculated; the sender encodes the feature point information of the half face and the difference; the sender sends the encoded feature point information and difference to a receiver; the receiver decodes the encoded feature point information and difference of the half face; and the receiver generates a corresponding face according to the decoded feature point information and difference. Because only the feature point information of one half of the face and the difference are encoded, the spatial redundancy in coding and decoding the face feature points is decreased, the amount of encoded video data is reduced, encoding efficiency is improved, and less bandwidth is occupied.
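The abstract states that the receiver decodes the half-face information and differences and then generates the corresponding face. The sketch below shows one plausible receiver-side reconstruction matching the toy encoder sketched under Embodiment 1 above; the byte layout, the midline value mid_x (assumed known to the receiver, e.g. sent as side information), and the scale factor are all assumptions of this illustration, not details from the patent:

```python
import struct
import numpy as np

def decode_and_reconstruct(payload, mid_x, diff_scale=4):
    """Toy decoder matching encode_half_face: recover the half-face landmarks
    and the mirrored differences, then rebuild the other half of the face.
    """
    (n,) = struct.unpack_from("<H", payload, 0)
    offset = 2

    left = np.array(struct.unpack_from("<%dh" % (2 * n), payload, offset),
                    dtype=float).reshape(n, 2)
    offset += 4 * n

    diffs = np.array(struct.unpack_from("<%db" % (2 * n), payload, offset),
                     dtype=float).reshape(n, 2) / diff_scale

    mirrored_right = left + diffs             # undo the subtraction of step 102
    right = mirrored_right.copy()
    right[:, 0] = 2.0 * mid_x - right[:, 0]   # reflect back across the midline

    return left, right                        # landmark sets for both half faces
```

With both halves recovered, the receiver can drive a face model or renderer to generate the corresponding face, as the abstract describes.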

Description

Technical field

[0001] The invention relates to the field of video encoding and decoding, and in particular to a method, device and system for encoding and decoding facial feature points.

Background technique

[0002] In recent years, with the rapid development of the video industry and smart phones, applications such as FaceTime and Tango have made multimedia communication popular on mobile terminals, and interacting with others through video calls is becoming more and more common. However, video calls cause multimedia data to grow rapidly and occupy a large amount of bandwidth during transmission, so a codec method is needed that can reduce the amount of data in a video call and reduce the transmission bandwidth occupied by the code stream.

[0003] In the prior art, in order to improve the compression efficiency of faces in video frames, some researchers have proposed a series of model-based video coding methods based on the features of fa...

Claims


Application Information

IPC (8): H04N7/14; G06T9/00
CPC: H04N7/141; G06T9/00; H04N7/142
Inventor 武俊敏
Owner PALMWIN INFORMATION TECH SHANGHAI