
A Speech Emotion Recognition Model and Recognition Method Based on Joint Feature Representation

A speech emotion recognition and joint feature representation technology, applied in speech analysis, instruments, and the like. It addresses the problems of low emotion recognition performance, weak speech emotion modeling ability, and insufficient use of the complementarity between different features, thereby enhancing the descriptive power of the features, improving generalization performance, and reducing parameter redundancy.

Active Publication Date: 2020-06-16
PEKING UNIV SHENZHEN GRADUATE SCHOOL

AI Technical Summary

Problems solved by technology

So far, neural-network-based speech emotion recognition methods learn deep emotional features from only a single type of feature (such as spectral features or hand-crafted features).
However, speech carries complex information from which many kinds of features can be extracted. Existing methods do not make full use of the complementarity between these features, which limits how well speech emotion can be modeled and leaves recognition performance relatively low.



Examples


Detailed Description of the Embodiments

[0057] The present invention is further described below by means of embodiments in conjunction with the accompanying drawings, but the scope of the present invention is not limited thereby in any way.

[0058] The present invention provides a speech emotion recognition method based on joint feature representation; the method flow is shown in Figure 1. The convolutional recurrent neural network is improved: the deep features that the convolutional recurrent neural network learns from the spectrogram are fused with hand-crafted features, and the two are mapped through a hidden layer into the same feature space for classification. This makes full use of the emotional information carried in speech, models speech emotion more effectively, and thereby improves the accuracy of speech emotion recognition.
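As a rough illustration of the fusion described above, the following is a minimal sketch assuming a PyTorch implementation; the class name HSFCRNN, the layer sizes, and the hand-crafted feature dimension are illustrative assumptions rather than the patent's exact architecture.

# Minimal sketch (assumed PyTorch): CRNN deep features from the spectrogram are
# concatenated with hand-crafted features and mapped by a hidden layer into one
# joint feature space for emotion classification. Sizes are illustrative.
import torch
import torch.nn as nn

class HSFCRNN(nn.Module):
    def __init__(self, n_mels=64, hsf_dim=384, hidden_dim=128, n_emotions=4):
        super().__init__()
        # Convolutional front end over the (mel-)spectrogram.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Recurrent layer over the time axis of the convolutional feature maps.
        self.rnn = nn.LSTM(input_size=64 * (n_mels // 4), hidden_size=128, batch_first=True)
        # Hidden layer that maps deep + hand-crafted features into the same joint space.
        self.joint = nn.Sequential(nn.Linear(128 + hsf_dim, hidden_dim), nn.ReLU())
        self.classifier = nn.Linear(hidden_dim, n_emotions)

    def forward(self, spec, hsf):
        # spec: (batch, 1, n_mels, frames); hsf: (batch, hsf_dim)
        x = self.conv(spec)                                # (batch, 64, n_mels//4, frames//4)
        x = x.permute(0, 3, 1, 2).flatten(2)               # (batch, frames//4, 64 * n_mels//4)
        _, (h, _) = self.rnn(x)                            # h: (1, batch, 128)
        deep = h[-1]                                       # utterance-level deep feature
        joint = self.joint(torch.cat([deep, hsf], dim=1))  # joint feature representation
        return self.classifier(joint)                      # emotion class logits

The single hidden layer producing the joint representation is the point of interest: both feature streams end up in one space before the classifier, rather than being classified separately and combined afterwards.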

[0059] Figure 3 is a structural block diagram of a joint feature representation-based speech emotion recognition model provided for implementing the present invention according to an examp...



Abstract

The invention discloses a speech emotion recognition model and method based on joint feature representation, and relates to speech emotion recognition technology. A convolutional recurrent neural network model is improved: a hidden layer in the network is configured to learn a joint representation of spectral deep features and hand-crafted features, and joint feature extraction and emotion classification are integrated into a single end-to-end network model. The joint feature exploits the complementarity between the spectral deep features and the hand-crafted features, makes full use of the emotional information carried in speech, and models speech emotion more completely. In addition, the end-to-end network model avoids the parameter redundancy that an intermediate output layer would introduce. Compared with the original speech emotion recognition method based on a pure convolutional recurrent neural network, the method based on joint feature representation improves recognition accuracy.
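To illustrate the end-to-end point in the abstract, here is a minimal training sketch, assuming the hypothetical HSFCRNN module sketched above; one cross-entropy loss drives both the joint feature layer and the classifier, with no intermediate output layer to supervise. Batch shapes and hyperparameters are placeholders.

# Sketch of end-to-end training (assumes the illustrative HSFCRNN above):
# a single loss optimizes joint feature learning and emotion classification together.
import torch
import torch.nn as nn

model = HSFCRNN(n_mels=64, hsf_dim=384, hidden_dim=128, n_emotions=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

spec = torch.randn(8, 1, 64, 256)    # batch of mel-spectrogram segments
hsf = torch.randn(8, 384)            # batch of hand-crafted feature vectors
labels = torch.randint(0, 4, (8,))   # emotion class indices

optimizer.zero_grad()
logits = model(spec, hsf)
loss = criterion(logits, labels)     # one loss for the whole network, end to end
loss.backward()
optimizer.step()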

Description

Technical field
[0001] The invention relates to speech emotion recognition technology, and in particular to the construction of a speech emotion recognition model based on a joint feature representation convolutional recurrent neural network (HSF-CRNN) and a corresponding speech emotion recognition method.
Background technique
[0002] Emotion recognition helps provide a humanized experience in human-computer interaction, enabling the computer to perceive and analyze the user's emotional state and then generate a corresponding response; it is an important capability that future computers must have. Among the ways humans communicate, speech is the most basic, so speech emotion recognition is particularly important. Speech emotion recognition is the process of labeling a given speech segment with its emotion type. Specifically, its task is to extract acoustic features that can express emotion from the collected speech signal and then map these features to a certain type of emotion. [...
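As a concrete illustration of "extracting acoustic features that can express emotion", the sketch below computes utterance-level statistics over frame-level descriptors, the usual form of hand-crafted statistical features; librosa is assumed, and the particular descriptors and statistics are illustrative choices, not the feature set specified by the invention.

# Sketch: frame-level acoustic descriptors -> utterance-level statistics.
# librosa is assumed; the chosen descriptors and statistics are illustrative only.
import numpy as np
import librosa

def extract_handcrafted_features(path, sr=16000):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, frames)
    zcr = librosa.feature.zero_crossing_rate(y)          # (1, frames)
    rms = librosa.feature.rms(y=y)                       # (1, frames)
    frames = np.vstack([mfcc, zcr, rms])                 # (15, frames)
    # Statistics over time summarize the utterance into a fixed-length vector.
    return np.concatenate([frames.mean(axis=1), frames.std(axis=1)])  # (30,)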

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G10L25/63; G10L25/30; G10L25/24
CPC: G10L25/24; G10L25/30; G10L25/63
Inventors: 邹月娴, 罗丹青
Owner: PEKING UNIV SHENZHEN GRADUATE SCHOOL