
Multi-modal emotion recognition method and system

A multi-modal emotion recognition technology, applied in the field of emotion recognition, which can solve problems such as facial expression image collection that ignores the emotion distinguishability of the images, the low emotion recognizability and high redundancy of the collected expression images, and poor model performance, and can achieve the effects of effective feature learning, improved performance, and a small amount of computation.

Active Publication Date: 2021-07-06
UNIV OF JINAN +1

AI Technical Summary

Problems solved by technology

Because facial expressions change slowly, collecting facial expression images indiscriminately and without filtering ignores the connection between emotional expressions of different modalities and does not consider the emotion distinguishability of the facial expression images. As a result, the collected facial expression images have low emotion recognizability and high redundancy, which degrades the performance of the models trained on them in subsequent emotion recognition research.


Examples


Embodiment 1

[0057] The purpose of this embodiment is to provide a multi-modal emotion recognition method.

A method for multi-modal emotion recognition, comprising:

[0059] Extracting the emotional speech component and the emotional image component from an emotional video and storing them separately;

[0060] Performing endpoint detection on the emotional speech component using the emotional speech residual conditional entropy difference endpoint detection method, to obtain an endpoint detection result for each speech frame (see the illustrative sketch after this embodiment);

[0061] Screening the emotional images in the emotional image component based on the endpoint detection result of the emotional speech component, and removing the emotional images belonging to silent segments;

[0062] Performing feature extraction on the reconstructed emotional speech component and the screened emotional image component respectively;

[0063] Fusing the features of the emotional speech component and the features of the emotional image component, and training a multi-modal emotion recognition model to complete emotion recognition.
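The endpoint detection step in [0060] is the algorithmic core of this embodiment: according to the abstract, each speech frame is reconstructed with the orthogonal matching pursuit (OMP) algorithm, and the change in a residual conditional entropy parameter across OMP iterations is compared against an empirical threshold. The following is a minimal sketch of that idea, assuming a DCT dictionary, a histogram-based entropy estimate of the OMP residual, and placeholder frame sizes and threshold values; it omits the compressed-sensing measurement step and is not the patented algorithm itself.

```python
import numpy as np

# Minimal sketch of OMP-based residual-entropy endpoint detection.
# Dictionary choice, entropy estimator, frame size, and threshold are
# assumptions for illustration, not parameters taken from the patent.

def dct_dictionary(n):
    """Orthonormal DCT-II basis used here as the sparsifying dictionary."""
    i = np.arange(n)[:, None]
    k = np.arange(n)[None, :]
    D = np.cos(np.pi / n * (i + 0.5) * k)
    return D / np.linalg.norm(D, axis=0)

def residual_entropy(r, bins=32):
    """Shannon entropy of the residual's amplitude histogram (a stand-in
    for the patent's residual conditional entropy parameter)."""
    counts, _ = np.histogram(r, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def omp_entropy_drop(y, D, n_iter=8):
    """Run OMP on one frame and return the drop in residual entropy
    between the first and last iteration."""
    residual, support, entropies = y.copy(), [], []
    for _ in range(n_iter):
        atom = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        if atom not in support:
            support.append(atom)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
        entropies.append(residual_entropy(residual))
    return entropies[0] - entropies[-1]

def detect_endpoints(speech, frame_len=256, hop=128, threshold=0.5):
    """Label each speech frame voiced (True) or silent (False) by
    thresholding the residual entropy difference (empirical threshold)."""
    D = dct_dictionary(frame_len)
    n_frames = max(0, (len(speech) - frame_len) // hop + 1)
    frames = [speech[i * hop: i * hop + frame_len] for i in range(n_frames)]
    return np.array([omp_entropy_drop(f, D) > threshold for f in frames])
```

The per-frame boolean output of such a detector is what the subsequent image-screening step ([0061]) would consume to decide which facial expression images fall inside voiced segments.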

Embodiment 2

[0101] The purpose of this embodiment is to provide a multi-modal emotion recognition system.

A multi-modal emotion recognition system, comprising:

[0103] A data acquisition module, which is used to extract the emotional speech component and the emotional image component from the emotional video and store them separately;

[0104] An endpoint detection module, which is used to perform endpoint detection on the emotional speech component using the emotional speech residual conditional entropy difference endpoint detection method and obtain an endpoint detection result for each speech frame;

[0105] An image screening module, which is used to screen the emotional images in the emotional image component based on the endpoint detection result of the emotional speech component and remove the emotional images belonging to silent segments;

[0106] A feature extraction module, which is used to extract features from the reconstructed emotional speech component and the screened emotional image component respectively...
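The module listing above is truncated in this summary, but the modules named so far can be read as a simple pipeline. Below is a hypothetical composition of those modules in Python; the class names, interfaces, and the time-alignment helper are illustrative assumptions, and the detector, feature extractors, and classifier are injected callables rather than the patent's specific models.

```python
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

# Hypothetical wiring of the modules named in this embodiment; names and
# interfaces are illustrative, not taken verbatim from the patent.

@dataclass
class EmotionVideo:
    speech: np.ndarray          # 1-D speech waveform extracted from the video
    images: List[np.ndarray]    # facial images, time-aligned with the speech
    image_fps: float            # image frames per second
    sample_rate: int            # speech samples per second

def is_voiced_at(voiced, t_seconds, sample_rate, hop=128):
    """Map an image timestamp onto the voiced/silent flag of the
    corresponding speech frame."""
    idx = int(t_seconds * sample_rate) // hop
    return bool(voiced[min(idx, len(voiced) - 1)])

class MultiModalEmotionRecognizer:
    def __init__(self, endpoint_detector: Callable,
                 speech_features: Callable, image_features: Callable,
                 classifier: Callable):
        self.endpoint_detector = endpoint_detector  # endpoint detection module
        self.speech_features = speech_features      # speech feature extraction
        self.image_features = image_features        # image feature extraction
        self.classifier = classifier                # fused-feature emotion model

    def recognize(self, video: EmotionVideo):
        # Endpoint detection on the speech component (per-frame voiced flags).
        voiced = self.endpoint_detector(video.speech)
        # Image screening: keep only images that fall inside voiced segments.
        kept = [img for t, img in enumerate(video.images)
                if is_voiced_at(voiced, t / video.image_fps, video.sample_rate)]
        # Feature extraction on both modalities (assumed to return 1-D
        # vectors), then feature-level fusion and classification.
        fused = np.concatenate([self.speech_features(video.speech, voiced),
                                self.image_features(kept)])
        return self.classifier(fused)
```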

Embodiment 3

[0110] This embodiment provides a method for detecting the working status of customer service personnel in a call center, which utilizes the above-described multi-modal emotion recognition method.

[0111] Customer service personnel must communicate with customers continuously and answer all kinds of questions. The work is tedious and high-pressure, and customers are sometimes unfriendly. In such a working environment, customer service personnel inevitably develop negative emotions, and negative emotions such as disgust or anger seriously degrade service quality and harm the mental health of the personnel themselves. The multi-modal emotion recognition method proposed in the present disclosure can be effectively applied to the detection of the working status of such customer service personnel.



Abstract

The invention provides a multi-modal emotion recognition method and system. The method applies a novel and robust endpoint detection algorithm to the speech component of an emotional video sample: using the prediction residual conditional entropy parameter generated during sample reconstruction under compressed sensing theory, it calculates the residual conditional entropy difference over the iterations of the orthogonal matching pursuit (OMP) algorithm, completes endpoint detection against an empirical threshold, and performs feature learning of the voiced-segment emotional speech on the reconstructed samples. The facial expression images are then screened according to the endpoint detection result of the emotional speech, and only facial expression images that coincide in time with active emotional speech are retained, which enhances the emotional separability of the facial expression data set and reduces redundancy. Finally, the emotional speech features and the facial expression features are fused and an effective multi-modal emotion recognition model is trained, achieving effective multi-modal emotion recognition.
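For the final fusion-and-training step described in the abstract, one common realization consistent with the listed IPC codes (G06N20/10 covers kernel methods such as SVMs) is feature-level concatenation followed by an SVM classifier. The sketch below assumes that choice; the feature dimensions, kernel, and hyperparameters are placeholders, not values fixed by the patent.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative feature-level fusion and model training; the SVM choice is
# an assumption consistent with IPC G06N20/10, not the patent's fixed model.

def fuse_features(speech_feats: np.ndarray, image_feats: np.ndarray) -> np.ndarray:
    """Concatenate per-sample speech and facial-expression feature vectors."""
    return np.concatenate([speech_feats, image_feats], axis=1)

def train_emotion_model(speech_feats, image_feats, labels):
    """Train a multi-modal emotion classifier on the fused features."""
    X = fuse_features(speech_feats, image_feats)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    model.fit(X, labels)
    return model

# Toy usage: 200 samples with 64-D speech and 128-D image features,
# four emotion classes (all dimensions are placeholders).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = train_emotion_model(rng.normal(size=(200, 64)),
                                rng.normal(size=(200, 128)),
                                rng.integers(0, 4, 200))
```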

Description

Technical Field

[0001] The disclosure belongs to the technical field of emotion recognition, and in particular relates to a multi-modal emotion recognition method and system.

Background Technique

[0002] The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.

[0003] Emotion recognition is a research hotspot in the field of affective computing. Emotional signals of the two modalities of emotional speech and facial expression images are convenient to collect and carry a large amount of emotional information; they are two important and highly correlated data sources in emotion recognition research.

[0004] The inventors found that the emotional speech data and facial expression images currently used in the field of multi-modal emotion recognition are usually obtained by storing the emotional speech components and image components of emotional video samples...


Application Information

IPC(8): G06K9/00; G06N3/08; G06N20/10; G10L15/05
CPC: G06N3/08; G06N20/10; G10L15/05; G06V40/174; G06V40/168
Inventor: 姜晓庆; 陈贞翔; 杨倩; 郑永强
Owner: UNIV OF JINAN