
Real Sense-based facial expression animation driving method

A facial-expression and driving-method technology, applied in animation production, image data processing, instruments, etc., which solves problems such as low efficiency, complex processing, and poor precision, so as to improve robustness, improve real-time performance, and reduce the effect of the grayscale-map conversion on the overall process.

Inactive Publication Date: 2018-04-06
UNIV OF ELECTRONIC SCI & TECH OF CHINA
Cites: 5 · Cited by: 8

AI Technical Summary

Problems solved by technology

[0004] The object of the present invention is to provide a RealSense-based method for driving facial expression animation, solving the problems of poor accuracy, low efficiency, and complicated processing that arise because existing performance-driven virtual facial expression display technology uses an ordinary camera for capture and feature extraction and therefore cannot be applied to complex backgrounds.



Examples


Embodiment 1

[0041] A RealSense-based method for driving facial expression animation, comprising the following steps (a sketch of the resulting pipeline follows the list):

[0042] Step 1: Acquire the depth image and preprocess it to obtain the depth-information grayscale image.

[0043] Step 2: Extract face and non-face sample sets from the depth-information grayscale image to build a depth-information training set, extract a Haar-like feature set from it, and perform AdaBoost training to obtain a cascaded face classifier, which is then used to track the face position.

[0044] Step 3: Map the tracked face position onto the color image to obtain the color-image face position, and extract facial features through a feature-extraction algorithm.

[0045] Step 4: Process the facial features together with the color-image face position and match them with the face model's action units (AUs) to complete the expression animation driving.
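The steps above amount to a detect-map-retarget loop. Below is a minimal Python sketch of Steps 2-4, not the patent's implementation: the cascade file name depth_face_cascade.xml is an assumption, the depth and color streams are assumed pixel-aligned, and extract_landmarks / landmarks_to_au_weights are hypothetical caller-supplied stand-ins for the unspecified feature-extraction algorithm and AU matching.

```python
import cv2

# Hypothetical cascade trained on depth-information grayscale images
# (the product of Step 2's AdaBoost training; the file name is assumed).
face_cascade = cv2.CascadeClassifier("depth_face_cascade.xml")

def drive_expression(depth_gray, color_img, extract_landmarks, landmarks_to_au_weights):
    """One pass of Steps 2-4 on a pair of aligned depth/color frames.

    depth_gray: 8-bit depth-information grayscale image (from Step 1).
    color_img:  the matching color frame.
    """
    # Step 2: track the face position on the depth grayscale image.
    faces = face_cascade.detectMultiScale(depth_gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face this frame
    x, y, w, h = faces[0]

    # Step 3: map the depth-image face position onto the color image.
    # The streams are assumed pixel-aligned, so the ROI carries over directly.
    face_roi = color_img[y:y + h, x:x + w]

    # Feature extraction "through an algorithm": the concrete landmark
    # extractor is left as a caller-supplied function (hypothetical).
    landmarks = extract_landmarks(face_roi)

    # Step 4: turn the features plus face position into action-unit (AU)
    # weights that a rigged face model consumes to drive the animation.
    return landmarks_to_au_weights(landmarks, (x, y, w, h))
```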

Embodiment 2

[0047] Step 1: Acquire the depth image and preprocess it to obtain the depth-information grayscale image.

[0048] Initialize the RealSense parameters and start the device to capture 640×480 color images and depth-information images, where the depth value d of the depth image ranges from 0.2 m to 2 m. Set a reasonable depth threshold range based on the depth information to remove background noise, and output a rectangular target area I of pixels to be detected, where the top-left vertex of the rectangle has coordinates (x0, y0), its length is recorded as width0, and its width as height0. Convert the depth image into a depth grayscale image, converting each pixel p by p = 255 − 0.255 × (d − 200) (with d in millimeters), and normalize the resulting grayscale image.
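As a concrete reading of this paragraph, the following Python sketch applies the stated depth threshold, derives the rectangular target area I, and converts depth values (in millimeters, per the d − 200 term) to grayscale with the quoted formula. Clipping to [0, 255] is an added assumption the patent leaves implicit.

```python
import numpy as np

def preprocess_depth(depth_mm, d_min=200, d_max=2000):
    """Step 1: depth frame -> normalized depth-information grayscale image.

    depth_mm : HxW array of depth values in millimeters
               (e.g. a 640x480 RealSense depth frame).
    Returns (gray, (x0, y0, width0, height0)), where the rectangle bounds
    the pixels that survive the depth threshold (target area I).
    """
    d = depth_mm.astype(np.float32)

    # Depth threshold range from the text: keep 0.2 m - 2 m, drop background noise.
    mask = (d >= d_min) & (d <= d_max)
    if not mask.any():
        raise ValueError("no pixels inside the depth threshold range")

    # Rectangular target area I: bounding box of the surviving pixels,
    # with top-left vertex (x0, y0), length width0, and width height0.
    ys, xs = np.nonzero(mask)
    x0, y0 = int(xs.min()), int(ys.min())
    width0 = int(xs.max()) - x0 + 1
    height0 = int(ys.max()) - y0 + 1

    # Quoted conversion p = 255 - 0.255 * (d - 200); clipping to [0, 255] is
    # an added assumption, since the formula goes negative beyond ~1.2 m.
    gray = np.clip(255.0 - 0.255 * (d - d_min), 0.0, 255.0)
    gray[~mask] = 0.0  # removed background rendered as black

    # Final normalization of the grayscale image to [0, 1].
    return gray / 255.0, (x0, y0, width0, height0)
```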

[0049] Step 2: Extract face and non-face sample sets from the depth-information grayscale image to obtain the depth-information training set, and extrac...
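Step 2's training stage (truncated above, but restated from Embodiment 1) pairs Haar-like features with AdaBoost. The sketch below illustrates that pairing using scikit-image and scikit-learn rather than the patent's own cascade trainer; the patch size and the face/non-face sample arrays are placeholder assumptions, and a single boosted stage stands in for the full cascade.

```python
import numpy as np
from skimage.transform import integral_image
from skimage.feature import haar_like_feature
from sklearn.ensemble import AdaBoostClassifier

def haar_features(patch):
    """Haar-like feature vector for one grayscale patch (e.g. 24x24 pixels)."""
    ii = integral_image(patch)
    # Two-rectangle features only, to keep the feature vector manageable.
    return haar_like_feature(ii, 0, 0, patch.shape[1], patch.shape[0],
                             feature_type=["type-2-x", "type-2-y"])

def train_depth_face_classifier(face_patches, nonface_patches):
    """AdaBoost over Haar-like features from depth-grayscale samples.

    face_patches / nonface_patches: lists of equally sized 2-D arrays cropped
    from depth-information grayscale images (placeholder training data).
    """
    X = np.array([haar_features(p) for p in face_patches + nonface_patches])
    y = np.array([1] * len(face_patches) + [0] * len(nonface_patches))
    # Boosted decision stumps: one strong classifier stage, not the
    # patent's full cascade of stages.
    clf = AdaBoostClassifier(n_estimators=100)
    clf.fit(X, y)
    return clf
```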



Abstract

The invention discloses a Real Sense-based facial expression animation driving method, and relates to the field of computer graphics. The method comprises the following steps: 1) obtaining a depth image and preprocessing it to obtain a depth-information grayscale map; 2) extracting samples on the basis of the depth-information grayscale map to obtain a depth-information training set, extracting a Haar-like feature set from the training set, carrying out AdaBoost training to obtain a cascade face classifier, and tracking the face position; 3) mapping the tracked face position to a color image to obtain the color-image face position, and extracting facial features through an algorithm; and 4) processing the facial features and the color-image face position, and matching them with the face model's action units (AUs) to complete expression animation driving. The method solves the problems of low precision, low efficiency, and complicated processing that arise because existing performance-driven virtual facial expression display technology uses an ordinary camera for capture and feature extraction and is unsuited to complicated backgrounds, and achieves the effect of improving the efficiency, correctness, and robustness of facial expression animation driving.

Description

Technical field

[0001] The invention relates to the field of computer graphics, and in particular to a RealSense-based facial expression animation driving method.

Background technique

[0002] Computer facial expression animation technology comprises face modeling technology and animation technology that simulate real human faces, and realistic computer face display is one of the most fundamental problems in computer graphics research. Owing to the physiological complexity of the human face and the high demands people place on the detail of facial changes, it is also one of the most difficult and challenging problems, and its applications are broad: the entertainment industry, such as movies and computer games, is the main driving force behind computer facial animation. It can not only produce various virtual characters in virtual reality environments, but can also be applied to the production and transmission of multimedia such as virtual hosts, videophones, remote network c...


Application Information

IPC (8): G06T13/40; G06K9/00; G06T7/246; G06K9/62
CPC: G06T7/246; G06T13/40; G06T2207/10028; G06T2207/30201; G06V40/172; G06V40/168; G06F18/22
Inventors: 蒋泉, 储海威, 王子君, 于军胜
Owner: UNIV OF ELECTRONIC SCI & TECH OF CHINA