
A sentiment classification method and system based on multimodal contextual semantic features

A semantic-feature and emotion classification technology in the field of affective computing. It addresses the problems of ignoring context dependencies within each modality and of not considering the contextual information of utterances, thereby increasing the available feature information and improving generalization ability and classification accuracy.

Active Publication Date: 2022-07-26
NANJING UNIV OF POSTS & TELECOMM
Cites: 3 | Cited by: 0

AI Technical Summary

Problems solved by technology

Existing methods try to learn hidden associations between multiple modalities at different stages, or make separate emotional predictions from each modality and then fuse them by voting. These approaches solve related problems to a certain extent and improve multimodal sentiment classification performance, but most of them ignore the context dependence within each modality and do not consider the contextual information of each utterance in the video, so there is still room for improvement.
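For contrast, the decision-level (vote) fusion mentioned above can be sketched as follows. This is only an illustration of why such fusion discards cross-utterance context; the per-modality labels are hypothetical examples, not output of the patented system.

```python
# Minimal sketch of vote-based late fusion: each modality classifies an utterance
# independently and the labels are merged by majority vote. Because every utterance
# is handled in isolation, cross-utterance context is ignored.
from collections import Counter

def vote_fusion(per_modality_labels):
    """per_modality_labels: e.g. {'expression': 'happy', 'speech': 'neutral', 'text': 'happy'}."""
    counts = Counter(per_modality_labels.values())
    return counts.most_common(1)[0][0]

print(vote_fusion({"expression": "happy", "speech": "neutral", "text": "happy"}))  # -> happy
```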




Detailed Description of the Embodiments

[0088] The technical solutions of the present invention will be described in further detail below with reference to the accompanying drawings and specific embodiments.

[0089] As shown in Figure 1, a sentiment classification method based on multimodal contextual semantic features provided by an embodiment of the present invention mainly includes the following steps:

[0090] Step (1) Data preprocessing and representation feature extraction: the short video is divided, with the utterance as the unit, into the same number N of semantic units (typically 12 ≤ N ≤ 60, depending on the length of the video). From each semantic unit, the corresponding video sample, speech sample, and text sample are generated, and three kinds of representation features, namely the expression feature vector, the spectrogram, and the sentence vector, are extracted from these three types of samples respectively.
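A minimal sketch of this preprocessing step, assuming utterance boundaries and transcripts are already available. The libraries used here (librosa, sentence-transformers) are stand-in tools not named by the patent, and extract_expression_vector is a hypothetical placeholder for the facial-expression extractor.

```python
# Sketch of Step (1): split a short video into utterance-level semantic units and
# extract the three representation features (expression vector, spectrogram, sentence vector).
import numpy as np
import librosa
from sentence_transformers import SentenceTransformer

sentence_encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed text encoder, not the patent's

def extract_expression_vector(video_path, start, end):
    """Hypothetical placeholder: pool facial-expression features from the frames
    of one semantic unit (utterance) spanning [start, end] seconds."""
    return np.zeros(128)  # stand-in for a real face-feature extractor

def preprocess(video_path, audio_path, utterances):
    """utterances: list of (start_sec, end_sec, transcript); typically 12 <= len(utterances) <= 60."""
    waveform, sr = librosa.load(audio_path, sr=16000)
    samples = []
    for start, end, text in utterances:
        expr_vec = extract_expression_vector(video_path, start, end)    # video sample -> expression feature vector
        segment = waveform[int(start * sr):int(end * sr)]               # speech sample for this utterance
        spectrogram = librosa.feature.melspectrogram(y=segment, sr=sr)  # speech sample -> spectrogram
        sent_vec = sentence_encoder.encode(text)                        # text sample -> sentence vector
        samples.append({"expression": expr_vec, "spectrogram": spectrogram, "sentence": sent_vec})
    return samples
```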

[0091...



Abstract

The invention discloses an emotion classification method and system based on multimodal contextual semantic features. The method includes: dividing a short video into the same number of semantic units with the utterance as the unit, generating the corresponding video, speech, and text samples, and extracting three representation features, namely expression features, spectrograms, and sentence vectors; inputting these representation features into the expression, speech, and text emotion feature encoders respectively to extract the corresponding emotional semantic features; constructing the corresponding adjacency matrices from the contextual relationships among the expression, speech, and text emotional semantic features; and inputting the emotional semantic features and the corresponding adjacency matrices into the corresponding graph convolutional neural networks to extract contextual emotional semantic features, which are fused into multimodal emotional features for emotion classification and recognition. By using graph convolutional neural networks, the invention makes better use of the contextual relationships of emotional semantic features and can effectively improve the accuracy of emotion classification.
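A minimal sketch of the context-modelling and fusion stage summarized above, assuming per-utterance emotional semantic features have already been extracted for each modality. The context window, feature dimensions, and number of emotion classes are illustrative assumptions, and the one-layer GCN stands in for the patent's graph convolutional networks.

```python
# Sketch: build an adjacency matrix from the contextual relationship between utterances,
# apply a graph convolution per modality, then fuse into a multimodal emotion feature.
import torch
import torch.nn as nn

def context_adjacency(num_utterances, window=2):
    """Connect each utterance to its neighbours within a context window (plus a self-loop)."""
    A = torch.zeros(num_utterances, num_utterances)
    for i in range(num_utterances):
        for j in range(max(0, i - window), min(num_utterances, i + window + 1)):
            A[i, j] = 1.0
    d_inv_sqrt = A.sum(dim=1).pow(-0.5)                    # symmetric normalisation D^-1/2 A D^-1/2
    return d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, features, adj):
        # H' = ReLU(A_hat @ H @ W): aggregate each utterance's contextual neighbours
        return torch.relu(adj @ self.linear(features))

# One GCN per modality, then concatenate into a multimodal emotion feature.
N, d = 20, 128                                  # 20 utterances, 128-d features (assumed)
adj = context_adjacency(N)
expr_f, speech_f, text_f = (torch.randn(N, d) for _ in range(3))  # stand-in encoder outputs
gcn_v, gcn_a, gcn_t = GCNLayer(d, 64), GCNLayer(d, 64), GCNLayer(d, 64)
multimodal = torch.cat([gcn_v(expr_f, adj), gcn_a(speech_f, adj), gcn_t(text_f, adj)], dim=-1)
classifier = nn.Linear(192, 7)                  # 7 emotion classes is an assumption
logits = classifier(multimodal)                 # per-utterance emotion predictions
```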

Description

Technical Field

[0001] The invention belongs to the field of affective computing, and in particular relates to an emotion classification method and system based on multimodal contextual semantic features.

Background Technology

[0002] In people's daily communication, emotion is an important bridge for mutual understanding, and the perception and understanding of emotion help people interpret each other's behaviors and psychological states. Facial expressions and speech are important ways for people to express their emotions, and emotion research on these single modalities has matured and been applied in daily life. However, as research has deepened, researchers have found that the emotional information expressed by a single modality is incomplete, so single-modal sentiment analysis has certain limitations. Therefore, more and more researchers have turned their attention to emotion classification research based on...


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06V20/40, G06N3/04, G06N3/08
CPC: G06N3/04, G06N3/049, G06N3/08, G06V20/49, G06V20/41
Inventors: 卢官明, 奚晨, 卢峻禾
Owner: NANJING UNIV OF POSTS & TELECOMM