
Virtual learning environment natural interaction method based on multimode emotion recognition

A technology for natural interaction in virtual learning environments based on multimodal emotion recognition. It addresses problems such as the difficulty of accurately conveying students' true emotions with single-modality information and the lack of effective prior research, achieving natural interaction with high practicability, engagement, and good recognition performance.

Inactive Publication Date: 2017-07-04
CHONGQING UNIV OF POSTS & TELECOMM
Cites: 3 · Cited by: 77

AI Technical Summary

Problems solved by technology

In a virtual learning environment, it is difficult to accurately convey the true emotions of students only by using single-modal emotion recognition information such as human expressions, voice, or gestures.
However, there is still a lack of effective research at home and abroad on how to build a multi-modal emotion recognition method based on facial expressions, voice, and gestures and its natural interaction in a virtual learning environment, and there is no patent application for this aspect.




Specific embodiments

[0038] Figure 1 is a flow chart of the natural interaction method for a virtual learning environment based on multimodal emotion recognition proposed by the present invention. Emotion features are extracted from the student's facial expression, voice, and posture, and each modality is classified and recognized separately. The three recognition results are then fused with a quadrature (product) rule algorithm, and the fusion result drives the decision-making module of the virtual teacher in the virtual learning environment, which selects the corresponding teaching strategy and behavior and generates the virtual agent's emotional performance, such as facial expression, voice, and gesture, for display in the virtual learning environment. The specific implementation is as follows:
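The decision-level fusion step described above can be sketched as follows. This is a minimal illustration of a product ("quadrature") rule: the per-modality posterior probabilities are multiplied element-wise and renormalized. The emotion labels and probability values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

# Hypothetical emotion classes; the patent does not fix a label set.
EMOTIONS = ["happy", "neutral", "confused", "bored"]

def fuse_product_rule(p_face, p_voice, p_posture):
    """Decision-level fusion by the product rule: multiply the
    per-modality posteriors element-wise, then renormalize so the
    fused scores again form a probability distribution."""
    fused = np.asarray(p_face) * np.asarray(p_voice) * np.asarray(p_posture)
    fused /= fused.sum()
    return fused

# Illustrative per-modality recognition results (posteriors).
p_face = [0.6, 0.2, 0.1, 0.1]
p_voice = [0.5, 0.3, 0.1, 0.1]
p_posture = [0.4, 0.4, 0.1, 0.1]

fused = fuse_product_rule(p_face, p_voice, p_posture)
decision = EMOTIONS[int(np.argmax(fused))]
print(decision)  # prints "happy": all three modalities lean the same way
```

The product rule sharpens agreement between modalities: a class that scores moderately in all three streams can beat one that scores highly in only one, which is one rationale for fusing at the decision layer rather than trusting a single modality.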

[0039] Step 1: Obtain color image information, depth information, voice signal and skeleton information representing the student's expression, voice and posture.
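The four input streams named in Step 1 can be grouped into a single per-frame record. The container below is a sketch; the shapes (a 640x480 RGB-D sensor and a 20-joint skeleton, as with Kinect-class devices) and field names are illustrative assumptions, not specified by the patent:

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class MultimodalFrame:
    """One synchronized capture of the student's state."""
    color: np.ndarray     # (H, W, 3) color image, used for expression analysis
    depth: np.ndarray     # (H, W) depth map aligned to the color image
    voice: np.ndarray     # 1-D PCM samples for the current utterance window
    skeleton: np.ndarray  # (n_joints, 3) 3-D joint positions for posture

# Example: an empty frame with assumed sensor dimensions.
frame = MultimodalFrame(
    color=np.zeros((480, 640, 3), dtype=np.uint8),
    depth=np.zeros((480, 640), dtype=np.uint16),
    voice=np.zeros(16000, dtype=np.int16),       # one second at 16 kHz
    skeleton=np.zeros((20, 3), dtype=np.float32),
)
```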

[0040] Step 101: The pre...



Abstract

The invention provides a natural interaction method for virtual learning environments based on multimodal emotion recognition. The method comprises the following steps: expression, posture, and voice information representing the learning state of a student is acquired, and multimodal emotion features are constructed from the color image, depth information, voice signal, and skeleton information; face detection, preprocessing, and feature extraction are performed on the color and depth images, and a support vector machine (SVM) is combined with the AdaBoost method to classify facial expressions; the voice emotion information is preprocessed, emotion features are extracted, and a hidden Markov model is used to recognize the voice emotion; the skeleton information is regularized to obtain human body posture representation vectors, and a multi-class SVM is used to classify posture emotion; and a quadrature rule fusion algorithm fuses the three recognition results at the decision-making layer, with the emotional performance of a virtual agent, such as its expression, voice, and posture, generated according to the fusion result.
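The skeleton "regularization" mentioned in the abstract can be sketched as a translation- and scale-normalization of the joint positions, so that the posture vector does not depend on where the student stands or how tall they are. The joint indices used here (0 = hip center, 1 = neck) and the normalization scheme are illustrative assumptions, not the patent's exact procedure:

```python
import numpy as np

def posture_vector(joints, root_idx=0, ref_idx=1):
    """joints: (n_joints, 3) array of 3-D joint positions.
    Returns a flattened, translation- and scale-normalized
    posture representation vector suitable as SVM input."""
    joints = np.asarray(joints, dtype=float)
    centered = joints - joints[root_idx]        # remove global translation
    scale = np.linalg.norm(centered[ref_idx])   # e.g. hip-to-neck distance
    return (centered / scale).ravel()

# Two skeletons differing only by position and overall scale should map
# to the same representation, which is what the normalization buys us.
base = np.array([[0.0, 0.0, 0.0],
                 [0.0, 0.5, 0.0],
                 [0.2, 0.9, 0.1]])
shifted_scaled = base * 2.0 + np.array([1.0, -3.0, 0.5])
same = np.allclose(posture_vector(base), posture_vector(shifted_scaled))
print(same)  # prints True
```

With posture represented this way, a multi-class SVM sees the same feature vector for the same pose regardless of camera placement or subject size.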

Description

Technical field

[0001] The invention relates to the fields of emotion recognition, multimodality, human-computer interaction technology, virtual reality, and education, and in particular to a natural interaction method for virtual learning environments based on multimodal emotion recognition.

Background technique

[0002] The virtual learning environment is an organic combination of virtual reality technology and classroom teaching. By constructing classroom teaching scenes, teaching strategies, teaching content, etc. in a virtual environment, it can break the limitations of time, space, and teaching resources, allowing students to "immerse themselves in the scene," experience the process of various teaching experiments, strengthen their understanding of various principles, concepts, and methods, and enhance students' interest in learning and learning outcomes.

[0003] The establishment of a virtual learning environment is an integrated and comprehensive technology, involving virtual rea...

Claims


Application Information

Patent Timeline
No application timeline available
Patent Type & Authority Applications(China)
IPC (8): G06F3/01; G10L15/22; G10L25/63
CPC: G06F3/011; G10L15/22; G10L25/63
Inventors: 蔡林沁, 陈双双, 徐宏博, 虞继敏, 杨洋
Owner CHONGQING UNIV OF POSTS & TELECOMM