
Virtual social contact method based on Avatar expression transplantation

A technology combining expression capture and the least squares method, applied in the fields of instruments, acquisition/recognition of facial features, and character and pattern recognition, to achieve the effect of reducing the impact of technical limitations.

Active Publication Date: 2019-08-16
SHANGHAI UNIV

AI Technical Summary

Problems solved by technology

Due to technical limitations in areas such as expression capture and network transmission, building a virtual social system with an expression capture function poses great challenges.


Examples


Embodiment 1

[0030] Referring to Figures 1 to 4, a virtual social method based on Avatar expression transplantation is characterized in that its specific steps are as follows:

[0031] Step 1. Use SDM to extract face feature points from the real-time input video stream:

[0032] The supervised descent method (SDM), which minimizes a nonlinear least squares (NLS) function, is used to extract face feature points in real time: during training, the descent directions that minimize the average value of the NLS function over different sampling points are learned. In the test phase, OpenCV face detection selects the face region of interest and initializes the mean 2D shape model; the face alignment problem then reduces to finding the step size along the gradient direction, so the learned descent directions are applied to minimize the NLS function, realizing real-time 2D face feature point extraction;
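A minimal sketch of the test-phase alignment just described, assuming pre-trained descent matrices `R_list`, offsets `b_list`, and a normalized mean shape are already available; the patch-based feature extractor and all names here are illustrative stand-ins (SDM implementations typically use SIFT or HOG descriptors), not the patent's implementation.

```python
import cv2
import numpy as np

def local_features(gray, landmarks, patch=16):
    """Illustrative local descriptor: mean/std-normalized pixel patches around
    each landmark (SDM papers typically use SIFT or HOG features here)."""
    h, w = gray.shape
    feats = []
    for x, y in landmarks:
        x0 = int(np.clip(x - patch // 2, 0, w - patch))
        y0 = int(np.clip(y - patch // 2, 0, h - patch))
        p = gray[y0:y0 + patch, x0:x0 + patch].astype(np.float32).ravel()
        feats.append((p - p.mean()) / (p.std() + 1e-6))
    return np.concatenate(feats)

def sdm_align(gray, face_box, mean_shape, R_list, b_list):
    """Test phase: place the mean 2D shape inside the detected face box, then
    apply the learned descent directions: x <- x + R_k * phi(x) + b_k."""
    x, y, w, h = face_box
    shape = mean_shape * np.array([w, h]) + np.array([x, y])   # init from mean shape
    for R_k, b_k in zip(R_list, b_list):                       # one cascade stage each
        phi = local_features(gray, shape)
        shape = shape + (R_k @ phi + b_k).reshape(-1, 2)
    return shape

# Usage on one video frame (R_list, b_list, mean_shape come from training):
# detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
# if len(faces) > 0:
#     landmarks = sdm_align(gray, faces[0], mean_shape, R_list, b_list)
```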

[0033] Step 2. Facial semantic features are used as the input of the DDE model trained by CPR, and the output expression coefficients and head motion parameters are transplanted to the Avatar...

Embodiment 2

[0046] This embodiment is basically the same as Embodiment 1, the particular features being as follows:

[0047] 1. In step 1, SDM is used to extract face feature points from the real-time input video stream. A series of descent directions, and the scales along those directions, are learned from a public image set so that the objective function converges to its minimum very quickly, thereby avoiding the need to compute the Jacobian and Hessian matrices.
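To make the training step just described concrete, here is a minimal sketch of how the descent directions and scales could be learned from a labelled image set by solving a regularized linear least-squares problem per cascade stage, so no Jacobian or Hessian is ever formed; `feature_fn`, the stage count, and the ridge parameter are illustrative assumptions, not values from the patent.

```python
import numpy as np

def train_sdm_stage(features, deltas, lam=1e-3):
    """Fit one descent stage (R_k, b_k) by ridge regression:
       minimize ||deltas - features @ R^T - b||^2 + lam * ||R||^2
    features: (M, D) descriptors extracted at the current shape estimates
    deltas:   (M, 2N) ground-truth shapes minus the current estimates
    Returns R of shape (2N, D) and b of shape (2N,)."""
    M, D = features.shape
    A = np.hstack([features, np.ones((M, 1))])        # append a bias column
    reg = lam * np.eye(D + 1)
    reg[-1, -1] = 0.0                                  # leave the bias unpenalized
    W = np.linalg.solve(A.T @ A + reg, A.T @ deltas)   # closed form, no Jacobian/Hessian
    return W[:-1].T, W[-1]

def train_sdm(images, gt_shapes, init_shapes, feature_fn, stages=4):
    """Cascade: fit a stage, move every training shape with it, then refit."""
    shapes = [s.copy() for s in init_shapes]
    R_list, b_list = [], []
    for _ in range(stages):
        feats = np.stack([feature_fn(img, s) for img, s in zip(images, shapes)])
        deltas = np.stack([(gt - s).ravel() for gt, s in zip(gt_shapes, shapes)])
        R, b = train_sdm_stage(feats, deltas)
        R_list.append(R)
        b_list.append(b)
        shapes = [s + (R @ f + b).reshape(-1, 2) for s, f in zip(shapes, feats)]
    return R_list, b_list
```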

[0048] 2. In step 2, the DDE model trained by CPR is used to obtain the expression coefficients and head motion parameters. The blendshape expression model replays facial expressions through a linear combination of base poses; a given facial expression performed by different people corresponds to a similar set of base weights, which can easil...
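As a concrete illustration of the blendshape replay mentioned above, the sketch below reconstructs an Avatar face as the neutral pose plus a weighted sum of per-expression offsets; because the same expression yields a similar weight vector across different people, the estimated coefficients can drive any Avatar that shares the blendshape layout. The array names are illustrative, not the patent's.

```python
import numpy as np

def blendshape_replay(neutral, blendshapes, weights):
    """Replay an expression as a linear combination of base poses:
       F = B0 + sum_i w_i * (B_i - B0)
    neutral:     (V, 3) vertex positions of the neutral face B0
    blendshapes: (K, V, 3) base expression poses B_1..B_K
    weights:     (K,) expression coefficients w_i, typically in [0, 1]"""
    offsets = blendshapes - neutral                 # per-expression vertex offsets
    return neutral + np.tensordot(weights, offsets, axes=1)

# Weights estimated from the user's face can drive any Avatar that uses the
# same blendshape ordering, e.g.:
# avatar_mesh = blendshape_replay(avatar_neutral, avatar_blendshapes, user_weights)
```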

Embodiment 3

[0052] A virtual social method based on Avatar expression transplantation, see Figure 1. The main steps are: use SDM to extract face feature points from the real-time input video stream; feed the 2D facial semantic features into the DDE model trained by CPR and transplant the output expression coefficients and head motion parameters to the Avatar; group the expression coefficients output by the DDE model into expression codes and classify the emotion; and synchronize the expression animation with the audio data through a network transmission strategy, as shown in Figure 2.
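A high-level sketch of the per-frame pipeline summarized above, assuming the SDM aligner, the CPR-trained DDE regressor, and an emotion classifier are available as callables; the JSON packet layout and the shared timestamp used for audio synchronization are illustrative assumptions rather than the patent's actual transmission strategy.

```python
import json
import time

def process_frame(frame, audio_chunk, sdm_align, dde_regress, classify_emotion, send):
    """One frame of the pipeline: landmarks -> expression coefficients and head
    pose -> emotion label -> timestamped packet for synchronized playback."""
    landmarks = sdm_align(frame)                      # step 1: 2D feature points via SDM
    expr_coeffs, head_pose = dde_regress(landmarks)   # step 2: CPR-trained DDE model
    emotion = classify_emotion(expr_coeffs)           # step 3: code grouping / emotion class
    packet = {                                        # step 4: sync animation with audio
        "t": time.time(),                             # shared timestamp for A/V sync
        "expr": [round(float(w), 4) for w in expr_coeffs],
        "pose": [float(p) for p in head_pose],
        "emotion": emotion,
        "audio": audio_chunk.hex() if audio_chunk else None,
    }
    send(json.dumps(packet).encode("utf-8"))
```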

[0053] 1. Use SDM to extract face feature points from the real-time input video stream:

[0054] The supervised descent method (SDM), which minimizes a nonlinear least squares function, is used to extract face feature points in real time; that is, the descent direction that minimizes the average value of the NLS function over different sampling points is learned during training, and th...



Abstract

The invention relates to a virtual social contact method based on Avatar expression transplantation. The virtual social contact method comprises the following specific operation steps: 1, extracting face feature points from a video stream input in real time by using an SDM (supervised descent method); 2, taking the facial semantic features as the input of a DDE (Displacement Dynamic Expression) model trained by CPR (Cascaded Pose Regression), and transplanting the output expression coefficients and head motion parameters to an Avatar (Virtual Body); 3, performing expression code grouping and emotion classification on the expression coefficients output by the DDE model; and 4, realizing expression animation audio synchronization through a network transmission strategy. According to the virtual social contact method based on Avatar expression transplantation, the facial expression of the user can be captured in real time; the expression replay can be carried out on the Avatar; and the virtual social contact of the network communication technology can be established.

Description

Technical field

[0001] The invention relates to the technical fields of computer vision, computer graphics, facial animation and network communication, and in particular to a virtual social method based on Avatar expression transplantation, which can capture the user's facial expressions in real time, replay them on an Avatar, and build virtual social interaction based on network communication technology.

Background technique

[0002] At present, virtual social systems have sprung up like mushrooms in the market, with widely differing business ideas, mainly divided into three types: tool type, UGC type and full-experience type. Among the tool type, the mobile virtual social platform VTime is the most representative: it is accessed through a VR headset and uses head movement for interactive control of the human-machine interface, navigation of the virtual world and voice communication, but the virtual character image it provides is relatively fixed and supports relativ...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00; G06Q50/00
CPC: G06Q50/01; G06V40/176; G06V40/171; G06V40/174
Inventors: 黄东晋, 姚院秋, 肖帆, 蒋晨凤, 李贺娟, 丁友东
Owner: SHANGHAI UNIV