
Virtual person-based multi-mode interactive processing method and system

A virtual-human, multi-modal technology applied in the field of human-computer interaction. It addresses the problem that prior-art virtual humans cannot perform multi-modal interaction and therefore cannot achieve a lifelike, smooth, anthropomorphic effect, with the benefits of meeting user needs, improving the user experience, and producing a smooth character-interaction effect.

Pending Publication Date: 2018-03-06
BEIJING GUANGNIAN WUXIAN SCI & TECH
Cites: 5 · Cited by: 44

AI Technical Summary

Problems solved by technology

The virtual humans of the prior art cannot perform multi-modal interaction; they always present a fixed state and cannot achieve a lifelike, smooth, anthropomorphic interaction effect.



Examples


First embodiment

[0043] Figure 1 is a schematic diagram of an application scenario of the virtual-human-based multi-modal interaction system according to the first embodiment of the present application. Virtual person A can be displayed to the user as a holographic image or on a display interface through the smart device on which it runs, and can coordinate voice, facial expression, emotion, head movement, and body movement. In this embodiment, the system mainly comprises a cloud brain (cloud server) 10 and a smart device 20 for multi-modal interaction with the user. The smart device 20 can be a traditional PC, a laptop computer, or a portable terminal device that accesses the Internet through a wireless local area network, a mobile communication network, or other wireless means. In the embodiments of the present application, wireless terminals include but are not limited to mobile phones, netbooks, etc., and ...
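The device/cloud split described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the class names (`CloudBrain`, `SmartDevice`), method names, and the `user_smiling` field are all assumptions introduced for clarity.

```python
class CloudBrain:
    """Server side (element 10): analyzes multi-modal input and decides the output.
    The decision logic here is a stand-in; the patent does not disclose it."""

    def decide(self, multimodal_input: dict) -> dict:
        # Toy decision: mirror the user's detected mood (assumed input field).
        mood = "happy" if multimodal_input.get("user_smiling") else "neutral"
        return {"speech": "How can I help?", "expression": mood}


class SmartDevice:
    """Client side (element 20): a PC, laptop, or wireless terminal that
    captures the user's multi-modal input and renders virtual person A."""

    def __init__(self, cloud: CloudBrain):
        self.cloud = cloud

    def interact(self, multimodal_input: dict) -> dict:
        # Round-trip to the cloud brain, then render the decision locally.
        decision = self.cloud.decide(multimodal_input)
        return decision
```

The point of the split is that the heavy analysis runs on the cloud server while the terminal only captures input and renders the resulting expression and speech.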

Second embodiment

[0086] In this embodiment, virtual person A can be displayed to the user as a holographic image or on a display interface through the smart device on which it runs. The difference from the first embodiment is that virtual person A can perform multi-modal interaction with the user in different scenes, such as family scenes, stage scenes, and playground scenes.

[0087] In this embodiment, the description of content that is the same as or similar to the first embodiment is omitted, and the description focuses on what differs from the first embodiment. As shown in Figure 5, the cloud brain 10 has the function of obtaining scene information. The specific operation is as follows: obtain the scene information of the current virtual person, the scene information including application-scene information and external-scene information; then, in the process of deciding the multi-modal output data, extract the scene information and use it to filt...
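The scene-based filtering step can be illustrated with a short sketch. The patent only says that scene information is extracted and used to filter decisions, so the data shapes below (`application_scene`, a per-candidate `scenes` list) are hypothetical.

```python
def filter_by_scene(candidates: list, scene_info: dict) -> list:
    """Keep only candidate outputs declared suitable for the current scene.

    `candidates` is an assumed list of dicts, each with a "scenes" list;
    `scene_info` is assumed to carry "application_scene" and
    "external_scene" keys, per paragraph [0087].
    """
    scene = scene_info.get("application_scene")
    return [c for c in candidates if scene in c.get("scenes", [])]
```

For example, in a family scene a stage-only greeting would be filtered out, leaving only outputs tagged for the family scene.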

Third embodiment

[0092] In this embodiment, virtual person A can be displayed to the user as a holographic image or on a display interface through the smart device on which it runs. The difference from the first embodiment is that virtual person A can perform multi-modal interaction with the user in different fields, such as finance and education.

[0093] The field is the application field associated with a pre-set virtual person. For example, a virtual person with the image of a star may have entertainment as its field; a virtual person with the image of a teacher, education; a virtual person with the image of a white-collar worker, finance. This domain information is pre-set for virtual persons with different images, and during multi-modal interaction with virtual persons in different fields, the multi-modal data output by the virtual person will also match the field; for example, in the entertainment field, on t...
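The persona-to-field mapping from the paragraph above can be sketched directly. The mapping keys and the `pick_response` helper are illustrative names, not part of the patent; only the star/entertainment, teacher/education, and white-collar/finance pairings come from the text.

```python
# Pre-set field per virtual-person image, per the examples in [0093].
DOMAIN_BY_PERSONA = {
    "star": "entertainment",
    "teacher": "education",
    "white_collar": "finance",
}


def pick_response(persona: str, responses_by_domain: dict) -> str:
    """Return the output matching the persona's pre-set field (assumed helper).
    Falls back to a "general" response when no domain-specific one exists."""
    domain = DOMAIN_BY_PERSONA.get(persona, "general")
    return responses_by_domain.get(domain, responses_by_domain.get("general", ""))
```

So a teacher-imaged virtual person answering from an education-tagged pool, while a star-imaged one falls back to a general reply when no entertainment content is available.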



Abstract

The invention discloses a virtual person-based multi-mode interactive processing method and system. A virtual person runs in an intelligent device. The method comprises the following steps: awakening the virtual person so that it is displayed in a preset display region, wherein the virtual person has specific characters and attributes; obtaining multi-mode data, wherein the multi-mode data includes data from the surrounding environment and multi-mode input data from interaction with a user; calling a virtual person ability interface to analyze the multi-mode data and decide multi-mode output data; matching the multi-mode output data with execution parameters for the virtual person's mouth shape, facial expression, head action, and limb action; and presenting the execution parameters in the preset display region. When the virtual person interacts with the user, voice, facial expression, emotion, head, and limbs can be fused to present a vivid and fluent character interaction effect, thereby meeting user demands and improving user experience.
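The five claimed steps can be mapped onto a single pipeline sketch. Everything here is an assumption made for illustration: the keyword-based "ability interface" is a stub, and the parameter names (`mouth_shape`, `facial_expression`, `head_action`, `body_action`) simply echo the abstract's wording.

```python
def process_interaction(multimodal_data: dict) -> dict:
    """Hedged sketch of the claimed pipeline; not the patent's implementation."""
    # Steps 1-2 (awakening, data capture) are assumed done by the caller.
    # Step 3: analyze the multi-modal data via the "ability interface"
    # (stubbed here as trivial keyword routing) and decide the output.
    text = multimodal_data.get("speech", "")
    if "hello" in text.lower():
        decision = {"speech": "Hello! Nice to meet you.", "emotion": "happy"}
    else:
        decision = {"speech": "Could you say that again?", "emotion": "neutral"}

    # Step 4: match the decision against execution parameters for mouth
    # shape, facial expression, head action, and limb action.
    happy = decision["emotion"] == "happy"
    return {
        "speech": decision["speech"],
        "mouth_shape": "talking",
        "facial_expression": decision["emotion"],
        "head_action": "nod" if happy else "tilt",
        "body_action": "wave" if happy else "idle",
    }
    # Step 5: the caller presents these parameters in the display region.
```

The key idea of the claim is that one decision drives all channels at once, so speech, expression, and body motion stay synchronized rather than being animated independently.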

Description

Technical Field

[0001] The invention relates to the field of human-computer interaction, and in particular to a method and system for virtual-human-based multi-modal interaction processing.

Background Technique

[0002] With the continuous development of science and technology and the introduction of information technology, computer technology, and artificial intelligence, robotics research has gradually moved beyond the industrial field and expanded into medical care, health care, family, entertainment, and service industries. People's requirements for robots have likewise been upgraded from simple, repetitive mechanical actions to intelligent robots capable of anthropomorphic question answering, autonomy, and interaction with other robots. Human-computer interaction has thus become an important factor in the development of intelligent robots.

[0003] At present, robots include physical robots with entities and virtual humans mounted ...

Claims


Application Information

IPC(8): G06F3/01
CPC: G06F3/011
Inventors: 周伟, 尚小维
Owner: BEIJING GUANGNIAN WUXIAN SCI & TECH