
Face motion synthesis method based on voice driving, electronic equipment and storage medium

A facial-motion and voice-signal technology in the field of computer information, addressing the problems that target data is difficult to migrate, that the resulting facial motion looks stiff, and that one-time training with multi-scenario application cannot be achieved, so as to produce vivid and lifelike effects.

Active Publication Date: 2021-09-17
CLOUDMINDS BEIJING TECH CO LTD

AI Technical Summary

Problems solved by technology

The target data used to train the VOCA model consists of the corner-point positions of a character model generated by 3D visual-effects synthesis software such as FLAME. Because the number of corner points of a FLAME-synthesized character model is fixed, it is difficult to migrate that target data to a custom virtual character, so the effect of one-time training with multi-scenario application cannot be achieved.
In addition, the VOCA model usually models only mouth movements; many other parts of the face, such as eyebrow raising and blinking, do not move at all, which makes the output facial motion look stiff.




Embodiment Construction

[0022] To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, various implementations of the present invention are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will understand, however, that many technical details are provided in each implementation to help the reader better understand the present application; the technical solutions claimed in this application can nevertheless be realized without these technical details, and with various changes and modifications based on the implementations below.

[0023] Existing speech-driven facial motion synthesis schemes mainly drive the mouth movement from speech. For example, if a virtual character is to say "the weather is really nice today", then while the voice is playing, the character's mouth movement should be basically the same as that...



Abstract

The embodiments of the invention relate to the technical field of computer information and disclose a voice-driven facial motion synthesis method, an electronic device, and a storage medium. The method comprises the following steps: processing the voice signal of a facial motion to be recognized to obtain an audio vector corresponding to the voice signal; inputting the audio vector into a parameter recognition model and outputting the facial muscle motion parameters corresponding to the facial motion to be recognized; and moving the corner points on a plurality of elastomers, divided according to the facial muscle distribution in a face model, by means of those facial muscle motion parameters, thereby obtaining the facial motion result. The scheme can be applied universally to character models with any number of corner points, and the output facial motions are rich with a natural expression effect.
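The three steps in the abstract (voice signal → audio vector → muscle motion parameters → corner-point displacement on muscle-based elastomers) can be sketched roughly as follows. This is a minimal illustrative sketch, not the patent's implementation: the function names, the spectral features, the toy linear "parameter recognition model", and the one-basis-vector-per-muscle displacement scheme are all assumptions.

```python
import numpy as np

def audio_to_vector(signal: np.ndarray, frame_len: int = 512) -> np.ndarray:
    """Step 1: turn the raw voice signal into per-frame audio feature
    vectors (here: a simple magnitude spectrum per frame)."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.abs(np.fft.rfft(frames, axis=1))

def predict_muscle_params(audio_vec: np.ndarray, n_muscles: int = 10) -> np.ndarray:
    """Step 2: a stand-in for the parameter recognition model, mapping each
    audio frame to one activation value per facial muscle group."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal((audio_vec.shape[1], n_muscles)) * 0.01
    return np.tanh(audio_vec @ w)  # activations in (-1, 1)

def apply_to_elastomers(params: np.ndarray,
                        corner_points: np.ndarray,
                        muscle_of_corner: np.ndarray,
                        basis: np.ndarray) -> np.ndarray:
    """Step 3: displace the model's corner points. Each corner belongs to one
    elastomer (muscle region) and moves along that muscle's basis direction,
    scaled by the predicted activation. Because displacement is defined per
    muscle region rather than per corner, any corner count works."""
    disp = params[muscle_of_corner][:, None] * basis[muscle_of_corner]
    return corner_points + disp
```

The key design point reflected here is that the model's output dimension is the number of muscle groups, not the number of mesh corners, which is what decouples the trained model from any particular character topology.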

Description

technical field
[0001] Embodiments of the present invention relate to the field of computer information technology, and in particular to a voice-driven method for synthesizing human facial movements, an electronic device, and a storage medium.
Background technique
[0002] Whether for a physical robot or a virtual character or model in a computer, automatically lip-syncing the character or model from audio is a difficult problem in the industry. Even after years of research and development, this problem still troubles practitioners in the field.
[0003] At present, there are many ways to drive the mouth shape of a virtual character from voice; the most commonly used is the VOCA (Voice Operated Character Animation) model. The target data for VOCA model training is the corner-point positions of a character model generated by 3D visual-effects synthesis software such as FLAME. However, since the number of corner points of the character model synthesized...
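The topology limitation in the background section can be made concrete with a short sketch: a model trained to predict per-corner offsets for a fixed-topology mesh (FLAME is commonly cited as having 5023 vertices) produces output of exactly that length, which cannot be applied to a custom character with a different corner count. The numbers and the helper function below are illustrative assumptions, not from the patent.

```python
import numpy as np

FLAME_CORNERS = 5023   # FLAME's fixed vertex count (commonly cited figure)
CUSTOM_CORNERS = 8000  # a hypothetical custom character with more corners

# A per-corner offset prediction from a fixed-topology model has a
# hard-coded first dimension equal to the training mesh's corner count.
prediction = np.zeros((FLAME_CORNERS, 3))

def apply_offsets(mesh: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    """Add per-corner offsets to a mesh; fails if corner counts differ."""
    if mesh.shape != offsets.shape:
        raise ValueError(
            f"corner count mismatch: mesh has {mesh.shape[0]} corners, "
            f"prediction has {offsets.shape[0]}"
        )
    return mesh + offsets

custom_mesh = np.zeros((CUSTOM_CORNERS, 3))
try:
    apply_offsets(custom_mesh, prediction)
except ValueError as e:
    print(e)  # the fixed-topology prediction cannot drive this mesh
```

This is exactly the migration barrier the patent's muscle-parameter representation is meant to avoid.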

Claims


Application Information

IPC(8): G06K9/00; G06K9/62; G06N3/04; G10L15/08; G10L15/16
CPC: G10L15/16; G10L15/08; G06N3/045; G06F18/22; G06F18/241; G06F18/214
Inventors: Peng Fei (彭飞), Ma Shikui (马世奎)
Owner CLOUDMINDS BEIJING TECH CO LTD