Multi-modal sentiment classification method based on heterogeneous fusion network

A technology that integrates a heterogeneous fusion network with sentiment classification, applicable to biological neural network models, neural learning methods, text-database clustering/classification, and related fields. It addresses the problem of low classification accuracy and achieves the effect of improving accuracy.

Active Publication Date: 2021-08-13
BEIJING INSTITUTE OF TECHNOLOGY

AI Technical Summary

Problems solved by technology

[0007] The purpose of the present invention is to solve the problem that existing multimodal sentiment classification methods use a single fusion scheme and have difficulty mining the hidden correlation features of multimodal data, which results in low multimodal sentiment classification accuracy. To this end, a multi-modal sentiment classification method based on a heterogeneous fusion network is proposed. The method extracts three modalities of data, namely text, picture, and audio, from videos posted by network users, and uses a deep-learning-based heterogeneous fusion network model to recognize the sentiment categories of the text, picture, and audio, as well as of the overall video.

Method used



Examples


Embodiment 1

[0098] This embodiment describes the process of applying the heterogeneous-fusion-network-based multimodal sentiment classification method of the present invention, as shown in Figure 1. The input data come from the video sentiment classification dataset CMU-MOSI. The sentiment labels of the dataset take values in {-3, -2, -1, 0, 1, 2, 3}, giving 7 classes in total, of which -3, -2, and -1 are negative, and 0, 1, 2, and 3 are non-negative. The input data include complete videos and video clips, from each of which three modalities of data, namely text, picture, and audio, are extracted.
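The label mapping described above can be sketched as a small helper. This is an illustrative function, not code from the patent; the function name is hypothetical.

```python
def mosi_polarity(score: int) -> str:
    """Map a CMU-MOSI sentiment label in {-3, ..., 3} to the binary
    polarity used in this embodiment."""
    if score not in {-3, -2, -1, 0, 1, 2, 3}:
        raise ValueError(f"invalid CMU-MOSI label: {score}")
    # -3, -2, -1 are negative; 0, 1, 2, 3 are non-negative.
    return "negative" if score < 0 else "non-negative"
```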

[0099] First, a heterogeneous fusion network model based on deep learning is proposed. The heterogeneous fusion network model achieves data fusion through different forms, different strategies, and different angles: two fusion forms, namely fusion within a modality and fusion between different modalities; two fusion strategies, namely feature-layer fusion and decision-layer fusion; and multi-modal global feature vectors cons...



Abstract

The invention discloses a multi-modal sentiment classification method based on a heterogeneous fusion network, and belongs to the technical field of opinion mining and sentiment analysis. The method comprises the following steps:

1) preprocessing video data;
2) constructing a text feature vector and identifying a text emotion category;
3) constructing a picture feature vector and identifying a picture emotion category;
4) constructing an audio feature vector and identifying an audio emotion category;
5) constructing a multi-modal global feature vector and identifying a multi-modal global emotion category;
6) constructing a multi-modal local feature vector and identifying a multi-modal local emotion category;
7) adopting a voting strategy to obtain the final sentiment classification result.

The heterogeneous fusion network adopts two fusion forms (intra-modal fusion and inter-modal fusion), two fusion angles (macroscopic fusion and microscopic fusion), and two fusion strategies (feature-layer fusion and decision-layer fusion). The method deeply mines the implied associated information among the multi-modal data and realizes mutual complementation and fusion among them, thereby improving the accuracy of multi-modal sentiment classification.
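Step 7's voting strategy combines the five per-classifier decisions from steps 2)–6). The sketch below shows a plain majority vote over those five outputs; the patent does not disclose the exact voting rule, so this is an assumed, illustrative implementation.

```python
from collections import Counter

def majority_vote(predictions):
    """Return the most frequent class among the sub-classifier outputs.
    With five voters and two classes, a strict majority always exists."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical outputs of the text, picture, audio, multi-modal global,
# and multi-modal local classifiers, in that order.
final = majority_vote(["negative", "non-negative", "negative",
                       "negative", "non-negative"])
# final == "negative"
```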

Description

Technical field

[0001] The invention relates to a multimodal sentiment classification method based on a heterogeneous fusion network, and belongs to the technical field of opinion mining and sentiment analysis.

Background technique

[0002] Multimodal sentiment classification is an important research topic in the fields of social computing and big data mining. It refers to identifying the emotional polarity of network users based on various modal data, such as the text, pictures, and videos of their comments. Emotional polarity includes the negative and non-negative categories.

[0003] Multimodal sentiment classification methods include methods based on feature-layer fusion and methods based on decision-layer fusion.

[0004] The multi-modal sentiment classification method based on feature-layer fusion first constructs the feature vectors of the various modal data, and then f...
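The contrast between the two fusion strategies in [0003]–[0004] can be sketched concretely. Feature-layer fusion combines per-modality feature vectors before a single classification; decision-layer fusion classifies each modality separately and combines the resulting decisions. The vectors and probabilities below are illustrative placeholders, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature vectors (dimensions are illustrative).
text_feat = rng.random(128)
image_feat = rng.random(64)
audio_feat = rng.random(32)

# Feature-layer fusion: concatenate the modality features into one vector,
# which would then be fed to a single classifier.
fused_features = np.concatenate([text_feat, image_feat, audio_feat])
# fused_features.shape == (224,)

# Decision-layer fusion: each modality is classified on its own, and the
# per-modality class probabilities [negative, non-negative] are combined,
# here by simple averaging (an assumed combination rule).
p_text = np.array([0.7, 0.3])
p_image = np.array([0.4, 0.6])
p_audio = np.array([0.6, 0.4])
p_fused = (p_text + p_image + p_audio) / 3
decision = ["negative", "non-negative"][int(np.argmax(p_fused))]
```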

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/62; G06F16/35; G10L25/63; G06N3/04; G06N3/08
CPC: G06F16/353; G06F16/355; G10L25/63; G06N3/08; G06N3/047; G06N3/048; G06N3/044; G06N3/045; G06F18/2415; G06F18/259; G06F18/256; G06F18/253
Inventors: 张春霞, 高佳萌, 彭成, 赵嘉旌, 薛晓军, 牛振东
Owner: BEIJING INSTITUTE OF TECHNOLOGY