
Face expression editing method based on generative adversarial networks

A facial expression editing technology based on generative adversarial networks, applied in the field of computer vision; it solves problems such as the difficulty of modifying facial texture with 3D-model-based methods, and achieves the effect of maintaining facial identity information

Active Publication Date: 2018-06-15
SEETATECH BEIJING TECH CO LTD

AI Technical Summary

Problems solved by technology

Facial expression editing methods based on 3D face models therefore find it difficult to modify the texture of the face.




Embodiment Construction

[0023] The present invention will be further described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0024] A facial expression editing method based on a generative adversarial network; the overall steps are:

[0025] Step S1, data preparation stage

[0026] a. Manually label each face in the RGB image collection with face identity information and facial expression information. The label of each picture is represented as [i, j], where i means the picture belongs to the i-th individual (0 ≤ i < N) and j means the picture belongs to the j-th expression (0 ≤ j < M); the entire image set contains N individuals and M expressions;
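The [i, j] labeling scheme above can be sketched as follows. This is only an illustration; the counts N and M are hypothetical placeholders, not values from the patent.

```python
# Sketch of the [i, j] labeling scheme: each image is tagged with an
# identity index i (0 <= i < N) and an expression index j (0 <= j < M).

N = 100  # number of individuals in the collection (hypothetical)
M = 7    # number of expression categories (hypothetical)

def make_label(i: int, j: int) -> list:
    """Return the [i, j] label for an image, validating both index ranges."""
    if not 0 <= i < N:
        raise ValueError(f"identity index {i} out of range [0, {N})")
    if not 0 <= j < M:
        raise ValueError(f"expression index {j} out of range [0, {M})")
    return [i, j]

# Example: the 3rd person (i = 2) showing the 5th expression (j = 4).
label = make_label(2, 4)
```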

[0027] b. Crop the faces in the labeled image collection out of each picture using a face detector and a facial landmark detector, and perform face alignment;
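Face alignment in step S1-b is commonly implemented by estimating a similarity transform that maps detected landmarks onto canonical template positions. The patent does not specify the alignment procedure, so the following NumPy sketch is an assumption: it aligns using only the two eye centers (which a landmark detector would supply), and the template coordinates are hypothetical.

```python
import numpy as np

def similarity_from_eyes(src_left, src_right, dst_left, dst_right):
    """Similarity transform (scale + rotation + translation) that maps the
    two detected eye centers exactly onto the template eye positions."""
    src_vec = np.asarray(src_right, float) - np.asarray(src_left, float)
    dst_vec = np.asarray(dst_right, float) - np.asarray(dst_left, float)
    scale = np.linalg.norm(dst_vec) / np.linalg.norm(src_vec)
    angle = np.arctan2(dst_vec[1], dst_vec[0]) - np.arctan2(src_vec[1], src_vec[0])
    c, s = np.cos(angle) * scale, np.sin(angle) * scale
    R = np.array([[c, -s], [s, c]])          # scaled rotation matrix
    t = np.asarray(dst_left, float) - R @ np.asarray(src_left, float)
    return R, t

def warp_point(p, R, t):
    """Apply the similarity transform to a single 2-D point."""
    return R @ np.asarray(p, float) + t

# Hypothetical template: eyes at (30, 40) and (70, 40) in a 100x100 crop.
R, t = similarity_from_eyes((120, 80), (180, 90), (30, 40), (70, 40))
aligned_left = warp_point((120, 80), R, t)   # maps onto the template left eye
```

In practice the transform would be applied to the whole image (e.g. via an image-warping routine) rather than to individual points, but the estimated R and t are the same.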

[0028] Step S2, model design stage

[0029] a. The model consists of two parts, namely the generator G and the discriminator D...
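The patent's figures and layer specifications are not reproduced in this record, so the exact architecture is unknown. A minimal conditional-GAN sketch in PyTorch, under the assumption that the generator takes a face image plus an M-dimensional expression control vector (broadcast and concatenated as extra channels, one common conditioning scheme) and the discriminator outputs a single real/fake logit, might look like:

```python
import torch
import torch.nn as nn

M = 7        # number of expression categories (hypothetical)
IMG_CH = 3   # RGB face crops

class Generator(nn.Module):
    """Takes a face image and an M-dim expression control vector,
    outputs an edited face image of the same spatial size."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(IMG_CH + M, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, IMG_CH, 3, padding=1), nn.Tanh(),
        )

    def forward(self, img, expr):
        b, _, h, w = img.shape
        # Broadcast the expression vector to a spatial map and concatenate
        # it with the image channels.
        expr_map = expr.view(b, -1, 1, 1).expand(b, expr.shape[1], h, w)
        return self.net(torch.cat([img, expr_map], dim=1))

class Discriminator(nn.Module):
    """Scores an image with a single real/fake logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(IMG_CH, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, img):
        return self.net(img)

G, D = Generator(), Discriminator()
face = torch.randn(2, IMG_CH, 64, 64)
expr = torch.eye(M)[torch.tensor([0, 3])]  # one-hot expression control vectors
edited = G(face, expr)                     # shape (2, 3, 64, 64)
score = D(edited)                          # shape (2, 1)
```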



Abstract

The invention discloses a face expression editing method based on generative adversarial networks. The method comprises the overall steps of: entering a data preparation stage, in which face images are manually labeled and cropped; entering a model design stage, in which the model is composed of a generator and a discriminator; and entering a model training stage, in which real face images with labels and images produced by the generator are input into the discriminator, and the discriminator is trained so that it can distinguish the distribution of real samples from that of generated samples and learn the distributions of facial expressions and face identity information. To-be-edited face pictures and expression control vectors are then input into the generator, which outputs face pictures controlled by the expression control vectors; the generator is then trained against the trained discriminator, and the above steps are repeated to complete the construction of the model. Finally, images are input to test the constructed model. The method ensures that the generator generates face images that are closer to the real face image distribution, better maintains face identity information, and is more efficient in expression editing.
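The alternating discriminator/generator training described in the abstract can be sketched as below. This is a hedged illustration using standard non-saturating GAN losses on toy flattened inputs; the patent's actual loss terms (e.g. for expression classification or identity preservation) are not reproduced here, and the network sizes are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
M, DIM = 7, 48   # expression classes, flattened toy "image" size (hypothetical)

# Toy stand-ins for the patent's generator and discriminator.
G = nn.Sequential(nn.Linear(DIM + M, 64), nn.ReLU(), nn.Linear(64, DIM), nn.Tanh())
D = nn.Sequential(nn.Linear(DIM, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

def train_step(real, expr):
    fake = G(torch.cat([real, expr], dim=1))

    # 1) Train the discriminator to separate real samples from generated ones.
    d_loss = (F.binary_cross_entropy_with_logits(D(real), torch.ones(real.shape[0], 1))
              + F.binary_cross_entropy_with_logits(D(fake.detach()),
                                                   torch.zeros(real.shape[0], 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator against the freshly updated discriminator.
    g_loss = F.binary_cross_entropy_with_logits(D(fake), torch.ones(real.shape[0], 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

real = torch.randn(8, DIM)                       # stand-in for real face crops
expr = torch.eye(M)[torch.randint(0, M, (8,))]   # one-hot expression controls
d_loss, g_loss = train_step(real, expr)
```

Repeating `train_step` over the dataset corresponds to the "repeat the above steps" loop in the abstract.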

Description

Technical field

[0001] The invention relates to an editing method, in particular to a facial expression editing method based on a generative adversarial network, and belongs to the technical field of computer vision.

Background technique

[0002] Facial expression editing requires controlling the facial expression in a photo while maintaining the face identity information. This technology has been widely used in face animation, social software, augmentation of face recognition datasets, and other fields. Current facial expression editing methods are all based on 3D deformable face models. A representative method is Patent No. 201310451508.9, a facial expression editing method based on a single camera and motion capture data. Its main technical means are: generating the user's 3D face model from the user's photos, while decoupling the 3D face model to separate identity and expression; then synthesizing a new 3D face model by contro...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T11/00; G06N3/04
CPC: G06T11/001; G06N3/045
Inventors: 张刚, 韩琥, 张杰, 山世光, 陈熙霖
Owner SEETATECH BEIJING TECH CO LTD