
Facial expression generation method based on generative adversarial network

A facial expression generation technology based on a generative adversarial network, applied in the field of computer vision. It addresses problems such as the inability to specify a target face, the limited variety of faces in expression databases, and poor facial expression quality, and achieves the effect of maintaining continuity and authenticity in the generated video.

Active Publication Date: 2021-06-18
SHENZHEN INST OF ADVANCED TECH

AI Technical Summary

Problems solved by technology

[0006] After analysis, existing schemes for generating facial expression videos based on deep learning usually generate videos from noise. Because facial expression databases are small, the generated faces lack diversity and a specific face cannot be designated; in addition, existing video generation models perform poorly on facial expressions.

Detailed Description of the Embodiments

[0021] Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.

[0022] The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.

[0023] Techniques, methods and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods and devices should be considered part of the description.

[0024] In all examples shown and discussed herein, any specific values should be construed as exemplary only, and not as limitations. Therefore, other instances of the exemplary embodiment may have dif...

Abstract

The invention discloses a facial expression generation method based on a generative adversarial network. The method comprises the following steps: constructing a deep learning network model, wherein the deep learning network model comprises a recurrent neural network, a generator, an image discriminator, a first video discriminator and a second video discriminator; the recurrent neural network generates a time-dependent motion vector for an input image; the generator takes the motion vector and the input image as input and outputs a corresponding video frame; the image discriminator judges the authenticity of each video frame; the first video discriminator judges the authenticity of the video and classifies it; and the second video discriminator controls the authenticity and smoothness of changes in the generated video; training the deep learning network model with sample images containing different expression categories as input; and generating a face video in real time with the trained generator. Facial features are preserved while expressions are generated, the generated video maintains continuity and authenticity, and the method generalizes to different faces.
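
The abstract above maps naturally onto five modules. The following is a minimal sketch, assuming PyTorch and hypothetical layer sizes and frame dimensions (64×64 frames, 16 frames per clip); it illustrates the described division of labor between the recurrent motion-vector generator, the frame generator, and the three discriminators, and is not the patent's actual implementation.

import torch
import torch.nn as nn

class MotionRNN(nn.Module):
    # Recurrent network: produces one time-dependent motion vector per frame
    # from features of the single input face image.
    def __init__(self, feat_dim=256, motion_dim=64, num_frames=16):
        super().__init__()
        self.encode = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        self.rnn = nn.GRU(feat_dim, motion_dim, batch_first=True)
        self.num_frames = num_frames

    def forward(self, image):                      # image: (B, 3, 64, 64)
        feat = self.encode(image)                  # (B, feat_dim)
        seq = feat.unsqueeze(1).repeat(1, self.num_frames, 1)
        motions, _ = self.rnn(seq)                 # (B, T, motion_dim)
        return motions

class FrameGenerator(nn.Module):
    # Renders one video frame from the input image plus one motion vector,
    # so identity comes from the image and motion from the vector.
    def __init__(self, motion_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3 * 64 * 64 + motion_dim, 3 * 64 * 64), nn.Tanh())

    def forward(self, image, motion):              # motion: (B, motion_dim)
        x = torch.cat([image.flatten(1), motion], dim=1)
        return self.net(x).view(-1, 3, 64, 64)

class ImageDiscriminator(nn.Module):
    # Judges the authenticity of each individual frame.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))

    def forward(self, frame):
        return self.net(frame)

class VideoDiscriminator(nn.Module):
    # Judges whole-clip authenticity; with num_classes > 0 it also classifies
    # the expression (first video discriminator), with num_classes = 0 it only
    # scores the realism and smoothness of the change (second video discriminator).
    def __init__(self, num_frames=16, num_classes=0):
        super().__init__()
        in_dim = num_frames * 3 * 64 * 64
        self.real_head = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, 1))
        self.cls_head = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, num_classes)) if num_classes else None

    def forward(self, video):                      # video: (B, T, 3, 64, 64)
        realism = self.real_head(video)
        expr_logits = self.cls_head(video) if self.cls_head is not None else None
        return realism, expr_logits

In this sketch the first video discriminator would be instantiated with num_classes set to the number of expression categories and the second with num_classes=0; all class names, dimensions, and layer choices are placeholders rather than the patent's disclosed design.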

Description

Technical Field

[0001] The present invention relates to the technical field of computer vision, and more specifically to a method for generating human facial expressions based on a generative adversarial network.

Background

[0002] In terms of face generation, 3DMM (the 3D morphable face model) generates faces by varying parameters such as shape, texture, pose, and illumination. DRAW (Deep Recurrent Attentive Writer) uses a recurrent neural network (RNN) to realize image generation, and PixelCNN uses a convolutional neural network (CNN) instead of an RNN to generate images pixel by pixel.

[0003] Since the emergence of the generative adversarial network (GAN), it has been widely used in image generation, and more and more GAN-based models have been applied to facial expression conversion. For example, ExprGAN (expression editing with controllable intensity) combines a conditional generative adversarial network with an adversarial autoencoder to achie...
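
For readers unfamiliar with adversarial training, the following is a generic sketch of the standard non-saturating GAN losses that such models build on, written in PyTorch; it is illustrative only and is not the specific loss formulation of this patent or of ExprGAN.

import torch
import torch.nn.functional as F

def discriminator_loss(real_logits, fake_logits):
    # The discriminator is pushed to score real samples as 1 and generated samples as 0.
    loss_real = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    return loss_real + loss_fake

def generator_loss(fake_logits):
    # The generator is pushed to make the discriminator score its outputs as real (1).
    return F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))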

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/174, G06V20/41, G06V20/46, G06N3/045
Inventors: 王蕊, 施璠, 曲强, 姜青山
Owner: SHENZHEN INST OF ADVANCED TECH