
A conditional generative adversarial network three-dimensional facial expression motion unit synthesis method

A 3D face and conditional generation technology, applied in animation production, image data processing, instruments, and related fields, which addresses the problems that facial AU annotation is complex and existing methods are difficult to apply widely.

Pending Publication Date: 2019-06-18
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

Although existing methods have made some progress in AU labeling, facial AU annotation is highly complex and is easily affected by different face shapes, expressions, lighting conditions, and head poses. As a result, AU-annotation-based synthesis of facial expression motion units for humanoid robots still faces many challenges and difficulties and is hard to apply widely.

Embodiment Construction

[0028] The present invention takes the three-dimensional face model oriented to a virtual or physical humanoid robot as a carrier to study the generation and control of the natural facial expressions of the humanoid robot. It mainly includes three aspects:

[0029] (1) First, the face of the humanoid robot is parametrically decomposed through the 3DMM model, the distribution of expression parameters in the 3D face model of the humanoid robot is learned and modeled with a deep generative model, and an efficient mapping is established between facial action unit annotations of different intensities and different combinations and the facial expression parameter distribution (a minimal generator sketch is given after this list);

[0030] (2) Then, the discriminator in the conditional generative adversarial network model judges the authenticity of the generated expression parameters and carries out game (adversarial) optimization against the output of the expression motion parameter generation model (see the adversarial training sketch after this list);

...
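As a concrete illustration of aspect (1), the following is a minimal sketch in PyTorch of a conditional generator that maps an AU annotation vector, together with a noise vector, to 3DMM expression parameters. The dimensions (AU_DIM, NOISE_DIM, EXPR_DIM), the layer sizes, and the names used here are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch (assumptions, not the patented architecture): a conditional
# generator G(z, au) that maps facial action unit (AU) intensity annotations,
# together with a noise vector, to 3DMM facial expression parameters.
import torch
import torch.nn as nn

AU_DIM = 17      # assumed number of facial action units in the annotation
NOISE_DIM = 64   # assumed latent noise dimension
EXPR_DIM = 29    # assumed number of 3DMM expression parameters


class ExpressionGenerator(nn.Module):
    """Conditional generator: (noise, AU annotation) -> expression parameters."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + AU_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, EXPR_DIM),
        )

    def forward(self, noise, au_labels):
        # Condition on the AU annotation by concatenating it with the noise.
        return self.net(torch.cat([noise, au_labels], dim=1))
```

Concatenating the condition with the noise is one common way to build a conditional generator; the patent text does not specify how the AU condition is injected.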
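For aspect (2), the sketch below continues the previous one (it reuses ExpressionGenerator and the assumed dimensions) and shows one adversarial "game optimization" step: the discriminator learns to separate real 3DMM expression parameters from generated ones under the same AU condition, and the generator is updated to fool it. The discriminator architecture, the binary cross-entropy loss, and the optimizer usage are assumptions for illustration.

```python
class ExpressionDiscriminator(nn.Module):
    """Conditional discriminator: (expression parameters, AU annotation) -> realness logit."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EXPR_DIM + AU_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, 1),  # single logit: real vs. generated
        )

    def forward(self, expr_params, au_labels):
        return self.net(torch.cat([expr_params, au_labels], dim=1))


def adversarial_step(G, D, opt_G, opt_D, real_expr, au_labels):
    """One game-optimization step of the conditional GAN."""
    bce = nn.BCEWithLogitsLoss()
    batch = real_expr.size(0)
    noise = torch.randn(batch, NOISE_DIM)

    # Discriminator update: real parameters should score 1, generated ones 0.
    fake_expr = G(noise, au_labels).detach()
    d_loss = (bce(D(real_expr, au_labels), torch.ones(batch, 1))
              + bce(D(fake_expr, au_labels), torch.zeros(batch, 1)))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator update: try to make generated parameters score 1.
    fake_expr = G(noise, au_labels)
    g_loss = bce(D(fake_expr, au_labels), torch.ones(batch, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```

In a training loop the optimizers would be created once, e.g. opt_G = torch.optim.Adam(G.parameters(), lr=2e-4), and adversarial_step would be called on each batch of real expression parameters and their AU annotations.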


Abstract

The invention relates to the fields of human-machine emotional interaction, intelligent robots, and the like, and aims to provide a solution to the problem of generating and controlling natural facial expressions of a humanoid robot, taking a three-dimensional face model oriented to a virtual or physical humanoid robot as a carrier. To this end, the conditional generative adversarial network three-dimensional facial expression motion unit synthesis method comprises the following steps: (1) establishing an effective mapping between facial motion unit labels of different intensities and different combinations and the facial expression parameter distribution; (2) carrying out game (adversarial) optimization on the output of the expression motion parameter generation model; and (3) applying the generated target expression parameters to the three-dimensional face model oriented to the humanoid robot, so as to realize generation and control of complex three-dimensional facial expressions of the humanoid robot. The method is mainly applied to intelligent robot design and manufacturing scenarios.
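Step (3) corresponds to the standard 3DMM formulation, in which a face mesh is a mean shape plus linear combinations of identity and expression bases. The NumPy sketch below illustrates applying generated expression coefficients to such a model; the vertex count, basis sizes, placeholder arrays, and names are assumptions and do not come from the patent.

```python
# Minimal sketch of driving a 3DMM face with generated expression parameters:
# vertices = mean_shape + shape_basis @ alpha + expression_basis @ beta
import numpy as np

N_VERTS = 5000   # assumed vertex count of the humanoid-robot face mesh
N_ID = 80        # assumed number of identity (shape) components
N_EXPR = 29      # assumed number of expression components

mean_shape = np.zeros(3 * N_VERTS)              # placeholder mean face
shape_basis = np.zeros((3 * N_VERTS, N_ID))     # placeholder identity basis
expr_basis = np.zeros((3 * N_VERTS, N_EXPR))    # placeholder expression basis


def apply_expression(alpha, beta):
    """Reconstruct 3D face vertices from identity coefficients (alpha)
    and generated target expression coefficients (beta)."""
    verts = mean_shape + shape_basis @ alpha + expr_basis @ beta
    return verts.reshape(N_VERTS, 3)  # one (x, y, z) row per vertex


# Example: a fixed robot-face identity driven by generated expression parameters.
alpha = np.zeros(N_ID)
beta = np.random.randn(N_EXPR) * 0.1   # stand-in for generator output
mesh = apply_expression(alpha, beta)
```

In practice the mean shape and bases would come from a 3DMM fitted to the robot face rather than placeholder zero arrays.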

Description

technical field [0001] The present invention relates to the fields of human-computer emotional interaction and intelligent robots, and in particular to a method for synthesizing three-dimensional facial expression motion units based on a Conditional Generative Adversarial Network (CGAN) and a three-dimensional morphable model (3D Morphable Model, 3DMM). The method can be widely used in scenarios such as facial expression control of humanoid intelligent robots, 3D facial expression synthesis for games and animation, and human-computer emotional interaction. Background technique [0002] As an important goal of future robot development, humanoid robots capable of natural expression interaction have attracted extensive attention from academia and industry. The natural expression interaction process generally includes two aspects: expression recognition and expression generation. Due to the diversity of facial expressions and the complexity of the hardware design of humanoid robots, how t...

Application Information

IPC(8): G06T13/40; G06T17/00
Inventor: 刘志磊, 张翠翠
Owner: TIANJIN UNIV