
A virtual social method based on avatar expression transplantation

A technology concerning facial expressions and facial features, applied in the field of virtual social interaction based on Avatar expression transplantation

Active Publication Date: 2021-11-05
SHANGHAI UNIV

AI Technical Summary

Problems solved by technology

Due to technical limitations such as expression capture and network transmission, building a virtual social system with an expression capture function poses great challenges.



Examples


Embodiment 1

[0030] Referring to Figure 1 to Figure 4, the virtual social method based on Avatar expression transplantation is characterized in that the specific steps are as follows:

[0031] Step 1. Use SDM to extract face feature points from the real-time input video stream:

[0032] The supervised descent method (SDM), which minimizes a nonlinear least squares (NLS) function, is used to extract face feature points in real time. During training, the descent directions that minimize the average value of the NLS function over different sampling points are learned. In the test phase, OpenCV face detection selects the face region of interest and initializes the average 2D shape model, so the face alignment problem reduces to finding the step size along the gradient direction; the learned descent directions are then applied to minimize the NLS function, realizing real-time 2D face feature point extraction.
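To make this alignment step concrete, the following is a minimal sketch of SDM-style landmark refinement at test time, assuming a Haar-cascade face detector from OpenCV, a mean 2D shape, and descent steps (R_k, b_k) learned offline; the helper names and the feature extractor are illustrative assumptions, not the patented implementation.

```python
# Minimal sketch of SDM-style landmark alignment at test time (hypothetical
# helper names). A Haar cascade detects the face ROI, the mean 2D shape is
# placed inside it, and a sequence of learned linear descent steps (R_k, b_k)
# refines the landmark positions from local appearance features.
import cv2
import numpy as np

def align_landmarks(gray, mean_shape, descent_steps, extract_features):
    """gray: grayscale frame; mean_shape: (N, 2) mean landmark positions
    normalized to [0, 1]; descent_steps: list of (R_k, b_k) learned offline;
    extract_features: callable mapping (image, shape) -> 1-D feature vector
    (e.g., gradient patches sampled around each landmark)."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # Initialize landmarks by scaling the mean shape into the detected ROI.
    shape = mean_shape * np.array([w, h]) + np.array([x, y])
    for R_k, b_k in descent_steps:
        phi = extract_features(gray, shape)      # local appearance features
        delta = R_k @ phi + b_k                  # learned descent direction
        shape = shape + delta.reshape(-1, 2)     # update landmark positions
    return shape
```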

[0033] Step 2. Facial semantic features are used as the input of the DDE model train...
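As a rough illustration of step 2, the sketch below shows how a cascaded regressor (in the spirit of CPR/DDE) could map aligned 2D landmarks to blendshape expression coefficients and head motion parameters; the stage regressors, parameter layout, and coefficient count are assumptions, not the patented DDE model.

```python
# Hypothetical sketch of cascaded regression from 2D facial landmarks to
# expression coefficients and head motion parameters. The stage regressors
# are assumed to have been trained offline; names are illustrative only.
import numpy as np

def regress_expression(landmarks_2d, stages, n_expr=46):
    """landmarks_2d: (N, 2) aligned landmarks; stages: list of callables,
    each mapping (current parameters, feature vector) -> parameter update."""
    # Parameter vector: [expression coefficients | rotation (3) | translation (3)]
    params = np.zeros(n_expr + 6)
    feat = landmarks_2d.ravel()
    for stage in stages:
        params = params + stage(params, feat)        # each stage refines the estimate
    expr_coeffs = np.clip(params[:n_expr], 0.0, 1.0)  # blendshape weights in [0, 1]
    head_pose = params[n_expr:]                       # rotation + translation
    return expr_coeffs, head_pose
```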

Embodiment 2

[0046] This embodiment is basically the same as Embodiment 1; its particular features are as follows:

[0047] 1. The first step uses SDM to extract face feature points from the real-time input video stream. A series of descent directions, and the scales along those directions, are learned from a public image set so that the objective function converges to its minimum very quickly, thereby avoiding the need to compute the Jacobian and Hessian matrices.
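For illustration, a single SDM training stage can be sketched as a regularized linear least-squares fit from appearance features to shape corrections, which is why no Jacobian or Hessian is ever computed; the variable names and the ridge regularization below are assumptions, not the patent's training procedure.

```python
# Illustrative sketch of the SDM training idea: learn a generic descent
# direction (R_k, b_k) for one cascade stage by linear least squares.
import numpy as np

def train_sdm_stage(features, shape_errors, reg=1e-3):
    """features: (M, D) appearance features at the current shape estimates;
    shape_errors: (M, 2N) ground-truth shapes minus current estimates.
    Returns (R_k, b_k) minimizing ||shape_errors - features @ R_k.T - b_k||^2."""
    M, D = features.shape
    X = np.hstack([features, np.ones((M, 1))])   # append bias column
    # Ridge-regularized least squares: W = (X^T X + reg*I)^-1 X^T Y
    W = np.linalg.solve(X.T @ X + reg * np.eye(D + 1), X.T @ shape_errors)
    R_k, b_k = W[:-1].T, W[-1]
    return R_k, b_k
```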

[0048] 2. In the virtual social method based on Avatar expression transplantation, the DDE model trained by CPR in step 2 is used to obtain the expression coefficients and head motion parameters: the Blendshape expression model realizes expression animation re-enactment through a linear combination of basic poses. A given facial expression of different people corresponds to a similar set of basis weights, making it convenient to transfer the performer's facial expression ...
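The blendshape combination described here can be sketched as follows, assuming a neutral mesh and K basic expression poses; the data layout is hypothetical and stands in for whatever avatar rig the method actually uses. Because the performer and the avatar share the same basis semantics, the tracked weights can be reused directly on the avatar.

```python
# Minimal sketch of blendshape-based expression transfer: the avatar's
# neutral mesh plus a weighted sum of expression basis offsets reproduces
# the performer's expression (hypothetical data layout).
import numpy as np

def apply_blendshapes(neutral_verts, basis_verts, weights):
    """neutral_verts: (V, 3) neutral pose; basis_verts: (K, V, 3) basic
    expression poses; weights: (K,) expression coefficients from the tracker."""
    offsets = basis_verts - neutral_verts[None, :, :]   # per-basis displacement
    # B = B0 + sum_i w_i * (B_i - B0)
    return neutral_verts + np.tensordot(weights, offsets, axes=1)
```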

Embodiment 3

[0052] A virtual social method based on Avatar expression transplantation, see Figure 1. The main steps are: use SDM to extract face feature points from the real-time input video stream; use the 2D facial semantic features as input to the DDE model trained by CPR, and transplant the output expression coefficients and head motion parameters to the Avatar; group the expression coefficients output by the DDE model for expression encoding and emotion classification; and realize synchronization of the expression animation and audio data through the network transmission strategy, as shown in Figure 2.
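As a rough sketch of the transmission side of the last step, the expression coefficients, head pose, and an emotion label could be packed into small timestamped packets so the receiver can keep the expression animation in sync with the audio stream; the packet layout and UDP transport below are assumptions, not the patented network transmission strategy.

```python
# Hypothetical sketch of sending one expression frame over the network.
# Each packet carries a timestamp so the receiver can align animation
# playback with the separately streamed audio.
import socket
import struct
import time

def send_expression_frame(sock, addr, expr_coeffs, head_pose, emotion_id):
    """expr_coeffs: list of floats; head_pose: 6 floats (rotation, translation);
    emotion_id: int label from the emotion classification step."""
    timestamp = time.time()
    payload = struct.pack(
        "<dBH%df6f" % len(expr_coeffs),   # timestamp, emotion, count, coeffs, pose
        timestamp, emotion_id, len(expr_coeffs), *expr_coeffs, *head_pose)
    sock.sendto(payload, addr)            # UDP keeps per-frame latency low

# Usage with an assumed endpoint:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_expression_frame(sock, ("127.0.0.1", 9000), coeffs, pose, emotion)
```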

[0053] 1. Use SDM to extract face feature points from the real-time input video stream:

[0054] The supervised descent method (SDM), which minimizes a nonlinear least squares (NLS) function, is used to extract face feature points in real time; that is, the descent direction that minimizes the average value of the NLS function over different sampling points is learned during training, and th...



Abstract

The invention relates to a virtual social interaction method based on Avatar expression transplantation. The specific operation steps of the method are: 1. use SDM (supervised descent method) to extract face feature points from the real-time input video stream; 2. use the facial semantic features as the input of the DDE (displaced dynamic expression) model trained by CPR (cascaded pose regression), and transplant the output expression coefficients and head movement parameters to the Avatar (virtual avatar); 3. group the expression coefficients output by the DDE model by expression encoding and emotion classification; 4. realize synchronization of the expression animation and audio through the network transmission strategy. The invention can capture the user's facial expressions in real time, replay the expressions on the Avatar, and build a virtual social network based on network communication technology.

Description

Technical field

[0001] The invention relates to the technical fields of computer vision, computer graphics, facial animation and network communication, in particular to a virtual social method based on Avatar expression transplantation, which can capture the user's facial expressions in real time, replay the expressions on an Avatar, and build a virtual social network based on network communication technology.

Background technique

[0002] At present, virtual social systems have sprung up in the market like mushrooms, with differing business ideas, mainly divided into three types: tool type, UGC type and full-experience type. Among the tool type, the mobile virtual social platform VTime is the most representative: accessed through a VR helmet, it uses head movement to realize interactive control of the human-machine interface, navigation of the virtual world, and voice communication, but the virtual character image it provides is relatively fixed and supports relativ...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06K 9/00; G06Q 50/00
CPC: G06Q 50/01; G06V 40/176; G06V 40/171; G06V 40/174
Inventors: 黄东晋, 姚院秋, 肖帆, 蒋晨凤, 李贺娟, 丁友东
Owner SHANGHAI UNIV