
Multi-modal emotion recognition system and method

A multi-modal emotion recognition technology, applied in character and pattern recognition, speech recognition, acquisition/recognition of facial features, etc. It addresses the problems that single-channel emotion recognition methods use only a small amount of information and have difficulty recognizing human emotion in complex situations, achieving the effect of accurate recognition.

Active Publication Date: 2020-07-10
EMOTIBOT TECH LTD

AI Technical Summary

Problems solved by technology

[0002] At present, emotion recognition machines usually identify human emotions using only one of text recognition, speech recognition, or visual image recognition technology. Such a single-channel approach uses a small amount of information and makes it difficult to recognize human emotion in complex situations.

Method used



Examples


Embodiment 1

[0022] With reference to figure 1, the multi-modal emotion recognition system provided in this embodiment includes: a voice receiver 1, a first emotion recognition subsystem 3, a second emotion recognition subsystem 4, a visual image receiver 2, a third emotion recognition subsystem 5, and an emotion output device 6. The voice receiver 1 receives the voice signal sent by the target object; the visual image receiver 2 receives visual image data about the target object; the first emotion recognition subsystem 3 obtains a first emotion recognition result according to the voice signal; the second emotion recognition subsystem 4 obtains a second emotion recognition result according to the voice signal; the third emotion recognition subsystem 5 obtains a third emotion recognition result according to the visual image data; and the emotion output device 6 determines the emotional state of the target object according to the first emotion recognition result, the second emotion recognition result, and the third emotion recognition result.
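The excerpt names the components and their roles but gives no code, so the following Python sketch is only an illustration of that structure. The data containers (VoiceSignal, VisualImageData), the single recognize() method on each subsystem, and the majority-vote fusion in the output step are assumptions for illustration, not details taken from the patent.

    from dataclasses import dataclass
    from typing import List, Protocol


    @dataclass
    class VoiceSignal:
        """Hypothetical container for the voice signal received by voice receiver 1."""
        samples: List[float]
        sample_rate: int


    @dataclass
    class VisualImageData:
        """Hypothetical container for the visual image data received by receiver 2."""
        frames: List[bytes]


    class EmotionRecognitionSubsystem(Protocol):
        def recognize(self, data) -> str:
            """Return an emotion label, e.g. 'happy', 'neutral', 'angry'."""
            ...


    class MultiModalEmotionRecognitionSystem:
        """Wires together the components named in Embodiment 1: two subsystems
        driven by the voice signal, one driven by the visual image data, and an
        emotion output step that combines the three results."""

        def __init__(self,
                     first_subsystem: EmotionRecognitionSubsystem,
                     second_subsystem: EmotionRecognitionSubsystem,
                     third_subsystem: EmotionRecognitionSubsystem) -> None:
            self.first_subsystem = first_subsystem      # subsystem 3: uses the voice signal
            self.second_subsystem = second_subsystem    # subsystem 4: uses the voice signal
            self.third_subsystem = third_subsystem      # subsystem 5: uses the visual image data

        def determine_emotional_state(self,
                                      voice: VoiceSignal,
                                      images: VisualImageData) -> str:
            r1 = self.first_subsystem.recognize(voice)
            r2 = self.second_subsystem.recognize(voice)
            r3 = self.third_subsystem.recognize(images)
            # Emotion output device 6: the excerpt only says the emotional state is
            # determined from all three results; a simple majority vote is used here
            # purely as a placeholder fusion rule.
            results = [r1, r2, r3]
            return max(set(results), key=results.count)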

Embodiment 2

[0033] With reference to figure 3, an embodiment of the present invention provides a multi-modal emotion recognition method, including:

[0034] Step S1: the voice receiver 1 receives the voice signal sent by the target object;

[0035] Step S2: the visual image receiver 2 receives visual image data about the target object;

[0036] Step S3: the first emotion recognition subsystem 3 obtains the first emotion recognition result according to the voice signal;

[0037] Step S4: The second emotion recognition subsystem 4 obtains a second emotion recognition result according to the voice signal;

[0038] Step S5: The third emotion recognition subsystem 5 acquires a third emotion recognition result according to the visual image data;

[0039] Step S6: The emotion output device 6 determines the emotional state of the target object according to the first emotion recognition result, the second emotion recognition result, and the third emotion recognition result.
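A minimal sketch of steps S1 to S6 as a single pipeline, assuming each component exposes one method (receive, recognize, determine_state); these interface names are hypothetical, since the excerpt lists the steps but not any programming interface.

    def run_multimodal_emotion_recognition(voice_receiver,
                                           visual_image_receiver,
                                           first_subsystem,
                                           second_subsystem,
                                           third_subsystem,
                                           emotion_output_device):
        """Executes steps S1-S6 of Embodiment 2 in order."""
        voice_signal = voice_receiver.receive()              # Step S1: voice signal from the target object
        visual_image_data = visual_image_receiver.receive()  # Step S2: visual image data about the target object
        first_result = first_subsystem.recognize(voice_signal)       # Step S3: first result from the voice signal
        second_result = second_subsystem.recognize(voice_signal)     # Step S4: second result from the voice signal
        third_result = third_subsystem.recognize(visual_image_data)  # Step S5: third result from the visual image data
        # Step S6: the emotion output device determines the emotional state
        # from the three recognition results.
        return emotion_output_device.determine_state(first_result,
                                                      second_result,
                                                      third_result)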

[0040] Preferably, as Figur...



Abstract

The present invention provides a multi-modal emotion recognition system and method. The system includes a voice receiver, a first emotion recognition subsystem, a second emotion recognition subsystem, a visual image receiver, a third emotion recognition subsystem, and an emotion output device. The voice receiver is used to receive the voice signal sent by the target object; the visual image receiver is used to receive visual image data about the target object; the first emotion recognition subsystem is used to obtain a first emotion recognition result according to the voice signal; the second emotion recognition subsystem is used to obtain a second emotion recognition result according to the voice signal; the third emotion recognition subsystem is used to obtain a third emotion recognition result according to the visual image data; and the emotion output device is used to determine the emotional state of the target object according to the first emotion recognition result, the second emotion recognition result, and the third emotion recognition result.

Description

Technical field

[0001] The invention relates to computer processing technology, and in particular to a multi-modal emotion recognition system and method.

Background technique

[0002] At present, emotion recognition machines usually identify human emotions using only one of text recognition, speech recognition, or visual image recognition technology. Such a single-channel approach uses a small amount of information and makes it difficult to recognize human emotion in complex situations.

Contents of the invention

[0003] The technical problem to be solved by the present invention is to provide a multi-modal emotion recognition system and method that integrates text recognition, speech recognition, and visual image recognition technology and performs human emotion recognition from multiple channels simultaneously, so that the emotion recognition machine can accurately identify...

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC (8): G06K9/00, G06K9/62, G06F40/284, G06F40/211, G10L15/02, G10L15/18, G10L15/26
CPC: G10L15/02, G10L15/1807, G10L15/1822, G10L15/26, G06F40/211, G06F40/284, G06V40/174, G06V40/171, G06V40/172, G06V40/161, G06V40/20, G06F18/24
Inventor: 简仁贤, 杨闵淳, 林志豪, 孙廷伟
Owner: EMOTIBOT TECH LTD