
Three-dimensional face recognition method based on expression invariant regions

A three-dimensional face recognition technology, applied in the field of three-dimensional face recognition based on expression-invariant regions, which solves the problem of low accuracy in extracting the expression-invariant region and achieves the effects of avoiding non-convergence and extracting the expression-invariant region accurately.

Status: Inactive · Publication Date: 2015-08-19
UNIV OF ELECTRONIC SCI & TECH OF CHINA
Cites 4 · Cited by 24

AI Technical Summary

Problems solved by technology

However, current multi-modal recognition methods extract the expression-invariant region with low accuracy, so they are still deficient in eliminating the influence of expression changes.



Examples


Embodiment

[0037] Figure 1 is a flowchart of the three-dimensional face recognition method based on expression-invariant regions of the present invention. As shown in Figure 1, the specific steps of the method include:

[0038] S101: Extract the features of the sample to be identified and of the control samples:

[0039] First, the features of the sample to be identified and of the control samples are extracted respectively. Two types of features of the three-dimensional face region are used in the present invention: statistical feature vectors and expression-invariant region point sets. To keep feature extraction accurate, the sample image is preprocessed before face region detection, and the extracted face region is corrected. Figure 2 is a flowchart of face feature extraction. As shown in Figure 2, the specific steps of face f...
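
Paragraph [0039] names the two feature types but is truncated before they are fully specified. As a rough, hedged illustration only, the following Python sketch shows one way such features could be computed from a cropped, pose-corrected 3D face point cloud; the function names, the depth-histogram statistic and the nose-centred region are simplified assumptions made for this summary, not the definitions used by the patent.

import numpy as np

# Minimal sketch, NOT the patented procedure: it only illustrates deriving
# (a) a statistical feature vector and (b) an expression-invariant point set
# from a cropped, pose-corrected 3D face region given as an (N, 3) array,
# with +z toward the camera and the nose tip already located.

def statistical_feature_vector(face_points: np.ndarray, bins: int = 32) -> np.ndarray:
    """Toy stand-in: a normalized depth histogram over the face region."""
    z = face_points[:, 2]
    hist, _ = np.histogram(z, bins=bins, range=(z.min(), z.max()))
    return hist / max(hist.sum(), 1)

def expression_invariant_region(face_points: np.ndarray,
                                nose_tip: np.ndarray,
                                radius: float = 40.0) -> np.ndarray:
    """Toy stand-in: keep points near the nose and upper face, which deform
    little under expression changes (the patent defines its own region)."""
    dist = np.linalg.norm(face_points - nose_tip, axis=1)
    above_mouth = face_points[:, 1] >= nose_tip[1] - 0.25 * radius
    return face_points[(dist <= radius) & above_mouth]

In this toy picture, the statistical feature vector summarizes the whole face region compactly for fast candidate screening, while the expression-invariant point set keeps the raw geometry of the rigid part of the face for the final matching step, mirroring the two roles described in the abstract.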



Abstract

The invention discloses a three-dimensional face recognition method based on expression-invariant regions. First, a two-dimensional face region is detected in the two-dimensional face image corresponding to the three-dimensional face data; an initial three-dimensional face region is extracted from the three-dimensional face data according to the two-dimensional face region, transverse slicing is performed on the initial three-dimensional face region, and a nose tip point is detected. A more accurate three-dimensional face region is then extracted according to the nose tip point, and the statistical characteristic vectors and the expression-invariant regions are extracted from this three-dimensional face region. The statistical characteristic vectors of the check samples serve as the reference samples of a rejection classifier; candidate check samples are obtained according to the statistical characteristic vector of the sample to be recognized, the point set of the expression-invariant region of the sample to be recognized is then matched against the point sets of the expression-invariant regions of the candidate check samples, and a recognition result is obtained according to the matching error. The method improves the accuracy of the three-dimensional face region, and the combination of statistical characteristic vectors and expression-invariant regions improves the accuracy of three-dimensional face recognition.
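
The nose-tip step (transverse slicing of the initial three-dimensional face region) and the final decision by matching error can be pictured with the hedged Python sketch below. It assumes a roughly frontal point cloud stored as an (N, 3) NumPy array with y running vertically and +z toward the camera; the slicing heuristic, the protrusion score and the brute-force nearest-neighbour error are illustrative assumptions, not the rules actually claimed by the patent.

import numpy as np

def nose_tip_by_transverse_slicing(points: np.ndarray, slice_height: float = 5.0) -> np.ndarray:
    """Cut the cloud into horizontal slices and return the most protruding
    per-slice maximum as the nose-tip candidate (illustrative heuristic)."""
    y = points[:, 1]
    edges = np.arange(y.min(), y.max() + slice_height, slice_height)
    best_point, best_score = None, -np.inf
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = points[(y >= lo) & (y < hi)]
        if len(band) < 10:                           # skip sparse slices
            continue
        idx = int(np.argmax(band[:, 2]))             # most forward point of the slice
        score = band[idx, 2] - np.median(band[:, 2]) # protrusion over the slice profile
        if score > best_score:
            best_score, best_point = score, band[idx]
    return best_point

def matching_error(probe_region: np.ndarray, gallery_region: np.ndarray) -> float:
    """Toy matching error: RMS nearest-neighbour distance from probe points to
    gallery points (a real system would first align the sets, e.g. with ICP).
    Uses O(P*G) memory, so it is only suitable for small point sets."""
    d = np.linalg.norm(probe_region[:, None, :] - gallery_region[None, :, :], axis=2)
    return float(np.sqrt(np.mean(d.min(axis=1) ** 2)))

Under this picture, recognition would assign the sample to be recognized to the candidate check sample whose expression-invariant point set gives the smallest matching error, which is how the abstract describes the final decision.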

Description

Technical field

[0001] The invention belongs to the technical field of three-dimensional face recognition, and more specifically relates to a three-dimensional face recognition method based on expression-invariant regions.

Background technique

[0002] Face recognition has been developing for more than twenty years since the 1990s. Early research focused on face recognition in two-dimensional images. As this research advanced, two-dimensional face recognition achieved good recognition rates under constraints such as limited lighting angles, poses, and expressions, which satisfies application scenarios with simple conditions. Face recognition under non-ideal conditions then became a research hotspot, but the recognition rate under complex conditions has not made great progress. With the development of 3D scanning technology, 3D data acquisition has become easier an...


Application Information

IPC(8): G06K9/00
CPC: G06V40/165, G06V20/64, G06V40/172
Inventor: 纪禄平, 尹力, 郝德水, 王强, 卢鑫, 黄青君, 杨洁
Owner: UNIV OF ELECTRONIC SCI & TECH OF CHINA