
Multi-modal emotion recognition method based on acoustic and text features

A technology combining emotion recognition and acoustic features, applied in character and pattern recognition, neural learning methods, biological neural network models, etc. It addresses the problems that the mismatch between model complexity and corpus size is not taken into account and that the latent emotional information missing from transcribed text cannot be made up for, achieving the effects of faster convergence, corrected ambiguity, and improved feature utilization.

Publication date: 2022-05-06 (pending)
Applicant: XUZHOU NORMAL UNIVERSITY

AI Technical Summary

Problems solved by technology

Although BERT can be used to obtain contextual word embeddings that represent the information contained in the transcribed text, this does not take into account the mismatch between BERT's complex network structure and the limited amount of data in emotional corpora.
Moreover, although BERT can generate representations of textual information, it cannot compensate for the latent emotional information (such as speaking pauses) that the transcribed text itself leaves out.
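The method described further below addresses the first problem with a hierarchical densely connected DC-BERT model. This excerpt does not spell out the connection scheme, so the following is only a minimal sketch of one common way to densely connect BERT's layers, a learned weighted sum over all hidden states so that shallow and deep features both contribute; the Hugging Face model name and the softmax weighting are illustrative assumptions, not the patent's specification:

```python
import torch
import torch.nn as nn
from transformers import BertModel

class DenselyConnectedBert(nn.Module):
    """Combine all BERT hidden layers with learned weights.

    A sketch of one dense-connection scheme; DC-BERT's exact
    wiring is not specified in the excerpt above.
    """

    def __init__(self, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        num_layers = self.bert.config.num_hidden_layers + 1  # + embedding layer
        # One learnable scalar weight per hidden layer.
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(
            input_ids=input_ids,
            attention_mask=attention_mask,
            output_hidden_states=True,
        )
        # hidden_states: tuple of (batch, seq_len, hidden) tensors,
        # one per layer; stack -> (layers, batch, seq_len, hidden).
        stacked = torch.stack(outputs.hidden_states)
        weights = torch.softmax(self.layer_weights, dim=0)
        # A weighted sum across layers lets shallow and deep features
        # both contribute, easing training on a small emotional corpus.
        return (weights[:, None, None, None] * stacked).sum(dim=0)
```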



Embodiment Construction

[0053] To explain the present invention more fully, it is described in detail below in conjunction with the drawings and specific embodiments.

[0054] As shown in figure 1, the multimodal emotion recognition method based on acoustic and text features of the present invention uses OpenSMILE to extract shallow emotional features from the input speech and fuses them with the deep features obtained when a Transformer network learns from those shallow features, generating multi-level acoustic features. The speech is force-aligned with its transcribed text of the same content to obtain pause information; the speaking pauses are then encoded, added to the transcribed text, and fed into the hierarchical densely connected DC-BERT model to obtain text features, which are then fused with the acoustic features. An attention-based bidirectional long short-term memory network (BiLSTM-ATT) is used as a...
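The pause-encoding step can be illustrated concretely. Below is a minimal sketch assuming the forced aligner returns word-level timestamps as (word, start, end) tuples; the [PAUSE] marker and the 0.2 s gap threshold are illustrative choices, not values taken from the patent:

```python
PAUSE_TOKEN = "[PAUSE]"   # hypothetical marker; the patent's encoding is not given
PAUSE_THRESHOLD_S = 0.2   # illustrative gap length, in seconds

def encode_pauses(aligned_words):
    """Insert a pause token wherever forced alignment shows a gap.

    aligned_words: list of (word, start_s, end_s) tuples, as produced
    by a forced aligner run on the speech and its transcript.
    Returns the transcript with pause markers, ready for the text model.
    """
    tokens = []
    prev_end = None
    for word, start, end in aligned_words:
        if prev_end is not None and start - prev_end > PAUSE_THRESHOLD_S:
            tokens.append(PAUSE_TOKEN)
        tokens.append(word)
        prev_end = end
    return " ".join(tokens)

# Example: a noticeable silence before "fine" becomes an explicit token.
words = [("i", 0.00, 0.12), ("am", 0.15, 0.30), ("fine", 0.95, 1.30)]
print(encode_pauses(words))  # -> "i am [PAUSE] fine"
```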



Abstract

The invention provides a multi-modal emotion recognition method based on acoustic and text features, suitable for extracting emotional features from speech and text. The method comprises the following steps: extracting shallow emotional features from the input speech with OpenSMILE, and fusing them with the deep features obtained by passing the shallow features through a Transformer network, to generate multi-level acoustic features; force-aligning the speech with its transcribed text to obtain pause information, encoding the speaking pauses, adding the encoded pauses to the transcribed text, feeding the result to a hierarchical densely connected DC-BERT model to obtain text features, and fusing the text features with the acoustic features; acquiring effective context information through a BiLSTM network using prior knowledge, extracting the parts of the features that highlight emotional information through an attention mechanism to avoid information redundancy, adding a global average pooling layer after the attention mechanism in place of the traditionally used fully connected layer, and finally sending the information to a softmax layer for emotion classification. The method has the advantages of simple steps, accurate recognition, and wide practical value.
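As a sketch of the classifier head described above: a BiLSTM reads the fused features, an attention mechanism re-weights the time steps, and global average pooling over time stands in for the usual fully connected head. The abstract does not fix dimensions or the exact attention form, so the additive attention, the per-step projection, and all sizes below are assumptions (PyTorch):

```python
import torch
import torch.nn as nn

class BiLstmAttGap(nn.Module):
    """BiLSTM + attention + global average pooling classifier (sketch).

    Dimensions and the additive-attention form are illustrative;
    the patent excerpt does not specify them.
    """

    def __init__(self, input_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.bilstm = nn.LSTM(input_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.attn_score = nn.Linear(2 * hidden_dim, 1)
        # Per-time-step projection to class scores: together with the
        # average pool below, it plays the role a flatten+FC head would.
        self.step_proj = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x):                      # x: (batch, time, input_dim)
        h, _ = self.bilstm(x)                  # (batch, time, 2*hidden)
        alpha = torch.softmax(self.attn_score(h), dim=1)  # (batch, time, 1)
        weighted = h * alpha                   # emphasize emotion-bearing steps
        logits = self.step_proj(weighted).mean(dim=1)     # GAP over time
        return torch.softmax(logits, dim=-1)   # emotion class probabilities

# Usage with made-up sizes: batch of 4, 120 frames, 384-dim fused features.
model = BiLstmAttGap(input_dim=384, hidden_dim=128, num_classes=4)
probs = model(torch.randn(4, 120, 384))
print(probs.shape)  # torch.Size([4, 4])
```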

Description

Technical field

[0001] The invention relates to a multimodal emotion recognition method based on acoustic and text features, which is suitable for extracting emotional features from speech and text, and belongs to the technical fields of artificial intelligence and speech emotion recognition.

Background technique

[0002] With the development of technology, speech emotion recognition and natural language processing have made great progress, but humans still cannot communicate naturally with machines. Building a system capable of detecting emotion is therefore crucial for human-computer interaction, yet it remains a challenging task due to the variability and complexity of human emotions.

[0003] Traditional emotion recognition is mainly aimed at a single modality, such as text, speech, or images, and has certain limitations in recognition performance. For example, in early speech emotion recognition tasks, researchers mainly used the acoustic features and som...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G10L25/63, G10L25/30, G10L25/03, G10L25/24, G06F40/30, G06K9/62, G06N3/04, G06N3/08
CPC: G10L25/63, G10L25/30, G10L25/24, G10L25/03, G06F40/30, G06N3/08, G06N3/044, G06N3/045, G06F18/2415
Inventor: 金赟 (Jin Yun), 顾煜 (Gu Yu), 俞佳佳 (Yu Jiajia)
Owner: XUZHOU NORMAL UNIVERSITY