
Sample screening and expression recognition method, neural network, equipment and storage medium

A sample screening and neural network technology, applied in the field of computing, which solves the problem that the accuracy of expression recognition cannot be effectively improved and achieves the effect of improving recognition accuracy.

Active Publication Date: 2019-12-03
SHENZHEN UNIV

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to provide a sample screening and expression recognition method, a neural network, equipment and a storage medium, aiming to solve the problem in the prior art that the accuracy of expression recognition cannot be effectively improved.

Method used



Examples


Embodiment 1

[0029] Figure 1 shows the implementation flow of the sample screening method provided by the first embodiment of the present invention. For convenience of description, only the parts related to this embodiment are shown. The details are as follows:

[0030] In step S101, multi-tuple samples are obtained, each multi-tuple sample including: an anchor sample, a positive sample and a negative sample.

[0031] In this embodiment, the multi-tuple samples may be triplet samples, quadruplet samples, and the like. Taking triplet samples as an example, a triplet sample includes: an anchor sample (also called a fixed sample or benchmark sample), a positive sample similar to the anchor sample, and a negative sample dissimilar to the anchor sample. Taking quadruplet samples as an example, a quadruplet sample includes: an anchor sample, a positive sample similar to the anchor sample, a first negative sample dissimilar to the anchor sample, and a second negative...
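The exact embedding and distance computation are not specified in this excerpt. As a minimal sketch, assuming deep feature embeddings compared with Euclidean distance (hypothetical choices), a triplet sample and its two sample distances could be represented as follows:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Triplet:
    """One multi-tuple sample: an anchor, a similar (positive) and a dissimilar (negative) sample."""
    anchor: np.ndarray    # embedding of the anchor (fixed / benchmark) sample
    positive: np.ndarray  # embedding of a sample similar to the anchor
    negative: np.ndarray  # embedding of a sample dissimilar to the anchor

def sample_distances(t: Triplet):
    """Return the first (anchor-positive) and second (anchor-negative) sample distances."""
    d_ap = float(np.linalg.norm(t.anchor - t.positive))
    d_an = float(np.linalg.norm(t.anchor - t.negative))
    return d_ap, d_an

# Illustration with random 128-dimensional embeddings (dimension chosen arbitrarily).
rng = np.random.default_rng(0)
t = Triplet(anchor=rng.normal(size=128), positive=rng.normal(size=128), negative=rng.normal(size=128))
print(sample_distances(t))
```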

Embodiment 2

[0039] On the basis of Embodiment 1, this embodiment further provides the following content:

[0040] In this embodiment, the first sample distance and the second sample distance are random sample distances that obey a normal distribution. According to the distribution statistics of the first sample distance and the second sample distance, boundary conditions for screening the multi-tuple samples are constructed, specifically including the flow shown in Figure 2:

[0041] In step S201, the mean and standard deviation of the normal distribution are determined using the modulus of the multi-tuple samples at the fully connected layer of the neural network and the feature dimension of the multi-tuple samples at the fully connected layer;

[0042] In step S202, using the mean, the standard deviation, a first significance level corresponding to the positive sample, and a second significance level corresponding to the negative sample, construct a first sub-boundary...
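The exact expressions for the mean, standard deviation and sub-boundary conditions are not given in this excerpt. The sketch below is purely illustrative: it assumes placeholder formulas that derive the mean and standard deviation from the feature norm r and feature dimension d at the fully connected layer, and turns the two significance levels into one-sided normal-distribution thresholds (function names, formulas and default values are all hypothetical):

```python
import numpy as np
from scipy.stats import norm

def distance_distribution(feature_norm: float, feature_dim: int):
    """Placeholder mean/std of the sample-distance distribution, derived only from the
    feature norm r and the feature dimension d of the fully connected layer."""
    mu = feature_norm * np.sqrt(2.0)                    # hypothetical: distances concentrate near r*sqrt(2)
    sigma = feature_norm / np.sqrt(2.0 * feature_dim)   # hypothetical: spread shrinks with dimension
    return mu, sigma

def screen_triplets(d_ap, d_an, feature_norm, feature_dim, alpha_pos=0.05, alpha_neg=0.05):
    """Boolean mask of multi-tuple samples kept by the boundary conditions: a tuple is
    screened out when its anchor-positive distance is implausibly large or its
    anchor-negative distance implausibly small under the assumed normal distribution."""
    mu, sigma = distance_distribution(feature_norm, feature_dim)
    upper_pos = mu + norm.ppf(1.0 - alpha_pos) * sigma  # first sub-boundary (positive pairs)
    lower_neg = mu - norm.ppf(1.0 - alpha_neg) * sigma  # second sub-boundary (negative pairs)
    return (np.asarray(d_ap) <= upper_pos) & (np.asarray(d_an) >= lower_neg)
```

Tuples rejected by the mask simply do not enter the next training iteration.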

Embodiment 3

[0047] This embodiment provides a method for facial expression recognition, including:

[0048] The image to be recognized is processed using the neural network obtained through training with the sample screening method of the above embodiments, and a facial expression recognition result is obtained.

[0049] In order to further improve the accuracy of the computation, in other embodiments the expression recognition method further includes: performing side-face screening on the multi-tuple samples and/or performing occlusion screening on the multi-tuple samples. After the side-face screening and/or occlusion screening, the retained results can be further processed by the above-mentioned sample screening method.
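As a minimal inference sketch (assuming a PyTorch classifier, a standard seven-expression label set, and hypothetical is_side_face / is_occluded predicates, none of which are specified in this excerpt):

```python
import torch

EXPRESSIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]  # assumed label set

def recognize_expression(model: torch.nn.Module, image: torch.Tensor) -> str:
    """Run the trained network on a preprocessed face image (1 x C x H x W)."""
    model.eval()
    with torch.no_grad():
        logits = model(image)
    return EXPRESSIONS[int(logits.argmax(dim=1))]

def keep_for_training(face_image, is_side_face, is_occluded) -> bool:
    """Optional pre-screening: drop side-face and/or occluded samples before the
    distance-based screening of the earlier embodiments (both predicates are hypothetical)."""
    return not (is_side_face(face_image) or is_occluded(face_image))
```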



Abstract

The invention is applicable to the technical field of computers, and provides a sample screening and expression recognition method, a neural network, equipment and a storage medium. The method comprises the following steps: training a neural network with multi-tuple samples; in each iteration step, determining a first sample distance between the anchor sample and the positive sample and a second sample distance between the anchor sample and the negative sample; constructing, according to the distribution statistics of the sample distances, a boundary condition for screening the multi-tuple samples; and using the boundary condition to screen the multi-tuple samples, with the retained results obtained by screening entering the training of the next iteration step. In this way, abnormal multi-tuple samples can be screened out during the training of the neural network, their influence on the training result is avoided, and the accuracy of expression classification and recognition is improved.
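Putting the pieces together, the per-iteration flow described in the abstract could look roughly like the skeleton below; it assumes a PyTorch embedding network, a standard triplet margin loss, and the hypothetical screening helpers sketched under Embodiment 2, none of which are prescribed by the patent text:

```python
import torch
import torch.nn.functional as F

def train_with_screening(model, optimizer, triplet_loader, feature_dim, num_epochs=10):
    """Each iteration: embed the multi-tuple samples, measure the two sample distances,
    drop tuples that violate the boundary condition, and train on the retained ones."""
    criterion = torch.nn.TripletMarginLoss(margin=0.2)  # margin chosen arbitrarily
    for _ in range(num_epochs):
        for anchors, positives, negatives in triplet_loader:
            ea, ep, en = model(anchors), model(positives), model(negatives)
            d_ap = F.pairwise_distance(ea, ep)  # first sample distances
            d_an = F.pairwise_distance(ea, en)  # second sample distances

            # Boundary condition from the distance statistics (see screen_triplets above);
            # the mean feature norm stands in for the 'modulus' at the fully connected layer.
            feature_norm = ea.norm(dim=1).mean().item()
            keep = torch.from_numpy(
                screen_triplets(d_ap.detach().cpu().numpy(),
                                d_an.detach().cpu().numpy(),
                                feature_norm, feature_dim))
            if not keep.any():
                continue  # the whole batch was screened out as abnormal

            loss = criterion(ea[keep], ep[keep], en[keep])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```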

Description

Technical Field

[0001] The invention belongs to the technical field of computers, and in particular relates to a sample screening and expression recognition method, a neural network, equipment and a storage medium.

Background Technique

[0002] With the development of human-computer interaction, facial expression recognition has become a hot topic in recent decades. Today, deep neural networks use complex structures or multiple processing layers composed of multiple nonlinear transformations to abstract data at a high level and apply it to end-to-end image recognition and analysis. Expression recognition based on deep learning has surpassed traditional methods on various expression databases, and the effectiveness of the trained features has been improved through various network designs and algorithms such as data augmentation, metric learning and network combination, improving their generalization and recognition ability.

[0003] In the facial expression recognition algor...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62, G06N3/08
CPC: G06N3/08, G06V40/174, G06F18/214
Inventors: 解为成, 田怡, 沈琳琳
Owner: SHENZHEN UNIV