
Domain-self-adaptive facial expression analysis method

A facial expression analysis method in the field of computer vision and affective computing research. It addresses the problem that differences between the training and test data domains hinder prediction accuracy, achieving good robustness and a wider scope of practical application.

Inactive Publication Date: 2015-05-13
南京宜开数据分析技术有限公司


Problems solved by technology

[0006] The purpose of the present invention is to solve the problem that differences between the training and test data domains in expression analysis hinder prediction accuracy, so that the expression analysis system is better suited to real application environments.




Detailed Description of the Embodiments

[0012] The invention is an automatic facial expression analysis method with domain self-adaptive capability. The present invention takes the facial Action Unit (AU) defined in FACS as the target of expression analysis. AUs are action units defined on facial muscle movements; for example, AU 12 denotes raising of the mouth corners, which is semantically close to the action of "smiling". By making full use of the correlation and complementarity between the two types of face image features, the proposed method can automatically analyze a video of the test subject and output, for each frame, a label indicating whether a specific AU appears.
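The "correlation and complementarity between the two types of features" suggests a two-view co-training scheme, which the abstract confirms ("weighted cooperative training"). The toy sketch below shows only the core co-training move under synthetic data: a classifier on one feature view pseudo-labels its most confident test-subject frames, and those frames are added to the other view's training pool. The nearest-centroid classifiers, the synthetic features, the confidence rule, and the absence of the patent's per-sample weighting are all simplifying assumptions, not the patent's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit(X, y):
    """Toy nearest-centroid 'classifier': one centroid per AU on/off class."""
    return X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)

def score(model, X):
    """Positive margin = closer to the AU-present centroid."""
    pos, neg = model
    return np.linalg.norm(X - neg, axis=1) - np.linalg.norm(X - pos, axis=1)

# Labeled source frames with two synthetic feature views per frame
# (stand-ins for e.g. geometric vs. appearance features).
n, m = 60, 40
y = np.r_[np.ones(n // 2), np.zeros(n // 2)]
view1 = rng.normal(0, 1, (n, 3)) + (2 * y[:, None] - 1)
view2 = rng.normal(0, 1, (n, 3)) + (2 * y[:, None] - 1)

# Unlabeled frames from the test subject, slightly domain-shifted (+0.4).
y_hidden = (rng.random(m) > 0.5).astype(float)
u1 = rng.normal(0.4, 1, (m, 3)) + (2 * y_hidden[:, None] - 1)
u2 = rng.normal(0.4, 1, (m, 3)) + (2 * y_hidden[:, None] - 1)

# One co-training round: view 1's classifier pseudo-labels its 10 most
# confident test frames; those frames augment view 2's training pool
# (a full round would also run the symmetric direction).
s1 = score(fit(view1, y), u1)
conf = np.argsort(-np.abs(s1))[:10]
pseudo = (s1[conf] > 0).astype(float)

view2_aug = np.vstack([view2, u2[conf]])
y_aug = np.r_[y, pseudo]
model2 = fit(view2_aug, y_aug)

# The adapted view-2 classifier now partly reflects the test subject's domain.
acc = np.mean((score(model2, u2) > 0) == y_hidden.astype(bool))
print(round(float(acc), 2))
```

On this well-separated synthetic data the confident pseudo-labels are almost always correct, which is the property co-training relies on; the patent's weighted variant would additionally down-weight less confident pseudo-labeled frames.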

[0013] Facial landmarks can be detected using existing techniques. We use SDM (Supervised Descent Method) to detect facial feature points in each frame of the face video. The facial feature point detection results are shown in Figure 1.
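SDM fits a cascade of linear regressors, each mapping image features extracted at the current landmark estimate to a shape update. The one-dimensional numpy sketch below illustrates that cascaded-regression idea with a synthetic feature function in place of the SIFT/HOG features real SDM uses; the feature design and dimensions are illustrative assumptions, not the patent's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth x-coordinate of a single toy landmark in 200 "images".
true_pos = rng.uniform(40.0, 60.0, size=200)

def phi(x, truth):
    """Hypothetical local feature at the current estimate x: a noisy
    offset cue plus a bias column (real SDM would extract SIFT here)."""
    return np.stack([truth - x, np.ones_like(x)], axis=1) \
        + rng.normal(0, 0.5, (len(x), 2))

# Train a cascade of 3 descent maps: at each stage, regress the
# remaining shape increment (truth - x) onto the current features.
x = np.full_like(true_pos, 50.0)      # mean-shape initialisation
cascade = []
for _ in range(3):
    F = phi(x, true_pos)
    target = true_pos - x
    R, *_ = np.linalg.lstsq(F, target, rcond=None)   # learned descent map
    cascade.append(R)
    x = x + F @ R                     # apply the update, as at test time

print(round(float(np.abs(x - true_pos).mean()), 2))
```

After the first descent step the mean error drops from about 5 pixels (mean-shape guess) to roughly the feature-noise level; real SDM behaves analogously over 2-D landmark sets.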

[0014] In ad...


Abstract

The invention relates to a domain-self-adaptive facial expression analysis method and belongs to the field of computer vision and affective computing research. The method aims to solve the problem that prediction accuracy in automatic expression analysis is hindered by differences between the training and test data domains, and is thus better aligned with actual application needs. The invention provides a domain-adaptive expression analysis method based on subject domains. The method comprises the following steps: defining a data domain for each test subject; defining the distance between subject domains by establishing an auxiliary prediction problem; selecting from a source data set a group of subjects whose data characteristics are similar to those of the test subject, to form a training set; and, on this training set, directly using part of the test subject's data in model training by way of weighted co-training, thereby bringing the prediction model closer to the test subject's domain. The method has the advantages that the isolation between training and test data is overcome and the prediction model adapts to the test data domain; the method is robust to domain differences and has a wide range of application.
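The first two steps, defining per-subject domains and measuring their distance via an auxiliary prediction problem, can be sketched as follows. One common realization of such an auxiliary problem (used here as an illustrative assumption, since the abstract does not specify it) is a proxy A-distance: train a classifier to tell one subject's frames from another's; the easier that task, the further apart the domains. The ridge "classifier", feature dimensions, and subject names below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def proxy_distance(feats_a, feats_b):
    """Auxiliary-problem domain distance: separability of A's frames
    from B's under a linear ridge classifier (0 = indistinguishable,
    1 = fully separable). A simplified stand-in for the patent's metric."""
    X = np.vstack([feats_a, feats_b])
    y = np.r_[np.zeros(len(feats_a)), np.ones(len(feats_b))]
    Xa = np.c_[X, np.ones(len(X))]                     # add bias column
    w = np.linalg.solve(Xa.T @ Xa + 1e-3 * np.eye(Xa.shape[1]), Xa.T @ y)
    acc = np.mean((Xa @ w > 0.5) == y)
    return 2.0 * (acc - 0.5)

# Synthetic per-subject feature clouds; subjects differ by a mean shift.
subjects = {s: rng.normal(mu, 1.0, size=(80, 5))
            for s, mu in [("src1", 0.0), ("src2", 0.3), ("src3", 3.0)]}
target = rng.normal(0.1, 1.0, size=(80, 5))            # test subject's domain

# Rank source subjects by closeness to the test subject's domain and
# keep the nearest ones as the training set (the abstract's third step).
ranked = sorted(subjects, key=lambda s: proxy_distance(subjects[s], target))
train_set = ranked[:2]
print(train_set)
```

Here `src3`, whose features are far from the target's, is excluded from the training set, mirroring the abstract's selection of subjects with similar data characteristics.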

Description

Technical Field

[0001] The invention belongs to the research field of computer vision and affective computing, and in particular relates to an automatic facial expression analysis method.

Background Technique

[0002] Automatic facial expression analysis is a long-standing research problem in computer vision. The goal of mainstream automatic expression analysis is to extract from images or videos a series of facial action units carrying semantic-level information, usually as defined in the FACS manual. FACS (Facial Action Coding System) is a subdivision and labeling system proposed by behavioral psychologists for studying facial expressions. The FACS system decomposes facial movement into a series of Action Units (AU), each associated with one or more facial muscle movements.

[0003] Most current expression analysis research assumes that the training (source) data and test (target) data come from the same data distribution. The following ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62
CPC: G06V40/176, G06F18/2411
Inventors: 丁小羽, 王桥, 夏睿
Owner: 南京宜开数据分析技术有限公司