
Multi-modal semantic fusion human-computer interaction system and method for virtual experiments

A virtual experiment and human-computer interaction technology, applied in the field of multi-modal semantic fusion human-computer interaction, which can solve problems such as a degraded operating experience, single-touch manipulation, and an excessively high load on the user's visual channel, and achieves the effect of stimulating learning interest.

Pending Publication Date: 2020-09-15
UNIV OF JINAN
Cites: 6 · Cited by: 6

AI Technical Summary

Problems solved by technology

[0003] With the continuing development of human-computer interaction, the interactive mode of virtual experiments has gradually evolved from early two-dimensional web-page interaction to three-dimensional interaction. However, many problems remain. Web-based virtual experiments use only two input channels, both of which are hand channels, so the interaction load on the user's hands is too high.
Although the Pad version of the virtual experiment designed by NoBook simplifies operation, it is still single-touch manipulation and does not fundamentally solve this problem [2]. In addition, the two-dimensional interactive interface is still deficient in the sense of manipulation and in the presentation of experimental results.
Most virtual experiments built with virtual reality technology use handheld devices to operate virtual objects in the scene, whereas real experiments require learners to use both hands. As a result, the operator's experimental actions cannot be standardized, learners cannot perform realistic experimental operations, and the operating experience is degraded.
In addition, existing virtual experiments all feed back information through a single visual channel. The operator can obtain information only visually, so the load on the user's visual channel is too high.
The single feedback channel also means that the hand-eye inconsistency problem in virtual experiments cannot be effectively solved, which reduces interaction efficiency.
Moreover, with a single feedback channel, learners cannot obtain timely experimental guidance when they make mistakes. Existing interaction methods can therefore no longer meet the interactive requirements of current virtual experiments.

Method used




Embodiment Construction

[0043] In order to clearly illustrate the technical features of the solution of the present invention, the solution is further elaborated below with reference to the accompanying drawings and specific embodiments.

[0044] As shown in Figure 1, a virtual-experiment-oriented multi-modal semantic fusion human-computer interaction system includes an interactive information integration module, an interactive information acquisition module, an interactive intention reasoning module, and an interactive task execution module. The interactive information integration module integrates virtual objects and experimental-operation knowledge information into the virtual environment and provides a data basis for the interactive intention reasoning module, including active objects, interactive-behavior knowledge rules, and passive objects; the interactive information acquisition module uses a multi-modal fusion model to accurately identify the real ...
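To make the role of the interactive-behavior knowledge rules concrete, the following is a minimal sketch of how the intention reasoning module might match fused gesture and speech semantics against rules, given the current scene state. All names, rules, and semantics here are illustrative assumptions, not taken from the patent text.

```python
# Hypothetical sketch: rule-based intent inference over fused modalities.
# A knowledge rule binds (gesture semantic, speech semantic, scene state)
# to an interaction intent; the reasoning step is a lookup over the rules.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Rule:
    gesture: str   # recognized gesture semantic, e.g. "grasp"
    speech: str    # recognized speech semantic, e.g. "beaker"
    scene: str     # required scene state, e.g. "idle"
    intent: str    # inferred interaction intent

RULES = [
    Rule("grasp", "beaker", "idle", "pick_up_beaker"),
    Rule("tilt", "beaker", "holding_beaker", "pour_liquid"),
    Rule("point", "burner", "idle", "ignite_burner"),
]

def infer_intent(gesture: str, speech: str, scene: str) -> Optional[str]:
    """Return the first rule-matched intent, or None if no rule applies."""
    for rule in RULES:
        if (rule.gesture, rule.speech, rule.scene) == (gesture, speech, scene):
            return rule.intent
    return None

print(infer_intent("grasp", "beaker", "idle"))   # pick_up_beaker
print(infer_intent("tilt", "beaker", "idle"))    # None (scene state mismatch)
```

Requiring the scene state to match is what lets the same gesture map to different intents at different stages of an experiment, which is the behavior the rule base is described as encoding.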



Abstract

The invention relates to a multi-modal semantic fusion human-computer interaction system and method for virtual experiments. The system comprises an interactive information integration module, and further comprises an interactive information acquisition module, an interactive intention reasoning module, and an interactive task execution module. The interactive information acquisition module adopts a multi-modal fusion model to accurately identify the real intention of the operator and provides the acquired information to the interactive intention reasoning module; the interactive intention reasoning module recognizes the user's interaction intention from gesture semantics and speech semantics, combined with the current interaction scene, and predicts potential interaction behaviors; and the interactive task execution module generates the experimental action expected by the user according to the action predicted by the interactive intention reasoning module, produces the corresponding experimental effect, returns the corresponding operation feedback, and finally outputs the effect and feedback to the user through different channels. The invention solves the problem of interaction difficulty in current virtual experiments.
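The abstract describes a pipeline of acquisition, reasoning, and execution modules. The sketch below shows one plausible way those modules could hand data to one another; the class names, inputs, and feedback strings are assumptions for illustration, not the patent's actual implementation.

```python
# Hypothetical pipeline sketch: acquisition -> intention reasoning ->
# task execution, with multi-channel (visual + audio) feedback.

class AcquisitionModule:
    def capture(self) -> dict:
        # A real system would read camera and microphone channels here.
        return {"gesture": "grasp", "speech": "pick up the beaker"}

class ReasoningModule:
    def infer(self, inputs: dict, scene: str) -> str:
        # Fuse the gesture and speech channels with scene context.
        if inputs["gesture"] == "grasp" and "beaker" in inputs["speech"]:
            return "pick_up_beaker" if scene == "idle" else "unknown"
        return "unknown"

class ExecutionModule:
    def run(self, intent: str) -> dict:
        # Produce the experiment effect plus feedback on several channels.
        if intent == "pick_up_beaker":
            return {"effect": "beaker_in_hand",
                    "feedback": ["visual: highlight beaker",
                                 "audio: confirmation tone"]}
        return {"effect": "none", "feedback": ["audio: please retry"]}

inputs = AcquisitionModule().capture()
intent = ReasoningModule().infer(inputs, scene="idle")
print(ExecutionModule().run(intent)["effect"])   # beaker_in_hand
```

Returning feedback on more than one channel is the point of the design: it spreads load off the visual channel, which the background section identifies as the bottleneck of existing virtual experiments.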

Description

Technical field

[0001] The present invention relates to the field of virtual reality technology, in particular to a human-computer interaction method for virtual experiments, and more particularly to a multi-modal semantic fusion human-computer interaction method for virtual experiments.

Background technique

[0002] Virtual experiments use virtual reality and visualization technology to enhance learners' sense of immersion in the virtual environment through the visual expression of relevant theoretical knowledge and operational scenarios and through human-computer interaction [1]. Virtual reality technology can reproduce abstract experiments from real settings, such as physics experiments, as well as experiments that are difficult to conduct because of expensive materials or operational risks, so that every learner can observe the virtual experimental phenomena, understand the real experimental principles, and turn the abstr...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F3/01; G06K9/00; G10L15/18; G10L15/22; G10L15/26
CPC: G06F3/011; G06F3/017; G10L15/22; G10L15/1822; G10L2015/223; G06V40/28
Inventors: 冯志全, 李健, 杨晓晖, 徐涛
Owner: UNIV OF JINAN