
Method of driving expression and gesture of character model in real time based on voice

A voice-driven method, applied in the field of virtual reality, that addresses the high cost and labor-intensive nature of manually authoring character animation, achieving low cost and savings in time and labor.

Active Publication Date: 2017-03-08
北京五一视界数字孪生科技股份有限公司
Cites: 34

AI Technical Summary

Problems solved by technology

However, no robot on the market can automatically produce facial expressions and gestures driven by voice when speaking in virtual reality. Consequently, whenever a virtual character needs to speak, professionals must manually author the virtual reality character's animation, which is not only costly but also time-consuming and labor-intensive.


Embodiment Construction

[0024] Embodiments of the method for driving the expressions and gestures of a character model in real time based on voice, according to the present invention, are described below with reference to the accompanying drawings.

[0025] The examples described here are specific implementations of the present invention, intended to illustrate its concept; they are explanatory and exemplary and should not be construed as limiting the implementation or scope of the present invention. Beyond the embodiments described here, those skilled in the art may adopt other obvious technical solutions based on the claims and the content disclosed in the description, including technical solutions formed by any obvious substitutions or modifications of the embodiments described here.

[0026] The accompanying drawings in this specification are schematic diagrams, which ass...



Abstract

The invention discloses a method of driving the expression and gesture of a character model in real time based on voice, for animating a speaking virtual reality character model. The method comprises the steps of: acquiring voice data; calculating the weight values of basic animations; calculating the weight values of decorative animations; calculating the weight values of basic mouth-shape animations; modifying the synthesized animation; and outputting a facial expression mesh. By driving the facial and mouth expressions of the current virtual reality character from the sound-wave information of the voice, the virtual image automatically generates expressions as natural as a real person's; no virtual reality character animation needs to be made by hand, so the cost is low and time and labor are saved.
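The weight-calculation steps are not detailed in this excerpt. As a minimal sketch of the general idea of driving a mouth animation from sound-wave information, the hypothetical code below maps each audio frame's energy (RMS) to a mouth-open blend-shape weight and linearly blends two poses; the frame size, normalization constant, and linear blend are illustrative assumptions, not the patent's actual method:

```python
import math

def rms(frame):
    """Root-mean-square energy of one audio frame (samples in [-1, 1])."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def mouth_weights(samples, frame_size=160, max_rms=0.5):
    """Map each frame's energy to a mouth-open weight in [0, 1].

    max_rms is an assumed normalization constant: any frame at least
    that loud fully opens the mouth.
    """
    weights = []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        w = min(rms(samples[i:i + frame_size]) / max_rms, 1.0)
        weights.append(w)
    return weights

def blend(neutral, mouth_open, w):
    """Linear blend-shape mix: neutral + w * (mouth_open - neutral)."""
    return [n + w * (m - n) for n, m in zip(neutral, mouth_open)]

# A loud frame opens the mouth; a silent frame leaves it closed.
loud, silent = [0.4] * 160, [0.0] * 160
ws = mouth_weights(loud + silent)          # roughly [0.8, 0.0]

# Toy 3-vertex "jaw" in neutral vs. fully open pose.
neutral_pose = [0.0, 0.0, 0.0]
open_pose = [0.0, -1.0, -0.5]
frame_pose = blend(neutral_pose, open_pose, ws[0])
```

In a real engine the per-frame weight would be applied to a facial blend shape (morph target) rather than raw vertex lists, and the patent additionally layers basic, decorative, and mouth-shape animation weights before outputting the final expression mesh.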

Description

technical field

[0001] The present invention relates to virtual reality (VR), and in particular to a method for generating the expressions and gestures of a character model in VR.

Background technique

[0002] With the development of virtual reality technology, virtual reality devices and matching virtual reality engines have appeared on the market. In the human-computer interaction of virtual reality, the realism of virtual characters greatly affects the user's experience. To better serve users, some companies have developed intelligent robots that can automatically recognize user intent and respond; for example, Microsoft's robot XiaoIce can hold automatic text conversations with users. Moreover, the robot's text reply can also be converted into a voice stream and corresponding emotional data through TTS (text-to-speech) technology. But there is no robot on the market that can automatically make facial expressions and gestures through voice drive when speaking in virtual reality.
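The background mentions that a TTS engine can emit emotional data alongside the voice stream. As a hedged illustration of how such data might feed a face rig (the emotion labels, preset table, and intensity scaling below are hypothetical, not from the patent or any specific TTS product):

```python
# Hypothetical emotion-to-blend-shape presets; names are illustrative only.
EMOTION_PRESETS = {
    "neutral": {"smile": 0.0, "brow_raise": 0.0, "frown": 0.0},
    "happy":   {"smile": 0.9, "brow_raise": 0.3, "frown": 0.0},
    "sad":     {"smile": 0.0, "brow_raise": 0.1, "frown": 0.7},
}

def expression_weights(emotion, intensity=1.0):
    """Scale a preset by intensity; unknown emotions fall back to neutral."""
    preset = EMOTION_PRESETS.get(emotion, EMOTION_PRESETS["neutral"])
    return {shape: w * intensity for shape, w in preset.items()}
```

A rendering loop would then apply these weights to the character's facial blend shapes each frame, combined with the voice-driven mouth animation.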

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T 13/40, G06T 13/20
CPC: G06T 13/205, G06T 13/40
Inventor: 魏建权
Owner: 北京五一视界数字孪生科技股份有限公司