
A method, system and robot for generating interactive content of robot

A technology for generating interactive content, applied in the field of robot interaction and robot interaction content generation, which can solve the problems that robot intelligence is poor and robots cannot be sufficiently anthropomorphic, with the effect of improving intelligence, improving the human-computer interaction experience, and improving anthropomorphism.

Publication Date: 2017-02-22 (Inactive)
SHENZHEN GOWILD ROBOTICS CO LTD

AI Technical Summary

Problems solved by technology

As far as robots are concerned, the expression feedback a robot currently gives is mainly obtained through pre-designed methods and deep-learning training corpora. Expression feedback trained through pre-designed programs and corpora has the following disadvantage: the output of expressions depends on the human's textual input, that is, similar to a question-and-answer machine, different words from the user trigger different expressions. In this case the robot actually outputs expressions according to an interaction method pre-designed by humans, so the robot cannot be sufficiently anthropomorphic and cannot, as humans do, give expression feedback based on the number of interactions, interaction behavior, intimacy, and so on. As a result, generating expressions requires a large amount of human-computer interaction, and the robot's intelligence remains poor.



Examples


Embodiment 1

[0043] As shown in Figure 1, this embodiment discloses a method for generating robot interaction content, including:

[0044] S101: Obtain a multi-modal signal;

[0045] S102: Determine user intention according to the multi-modal signal;

[0046] S103: According to the multi-modal signal and the user's intention, generate robot interaction content in combination with current robot variable parameters.

[0047] In terms of application scenarios, existing robots generally generate interactive content through question-and-answer interaction in fixed scenes, and cannot generate the robot's expression accurately for the current scene. The method for generating robot interactive content of the present invention includes: acquiring a multi-modal signal; determining the user's intention according to the multi-modal signal; and, according to the multi-modal signal and the user's intention, generating robot interaction content in combination with current robot variable parameters...
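To make the S101–S103 flow concrete, the following is a minimal Python sketch of the three steps. All names and rules in it (MultimodalSignal, VariableParameters, determine_intention, generate_interaction_content, the toy intention and expression logic) are illustrative assumptions, not implementation details taken from the patent.

```python
# Minimal, hypothetical sketch of the S101-S103 flow described above.
# All names and rules are illustrative assumptions, not taken from the patent.
from dataclasses import dataclass
from typing import Dict


@dataclass
class MultimodalSignal:
    """S101: a bundle of signals captured from the user (e.g. image + voice)."""
    image: bytes = b""
    speech_text: str = ""


@dataclass
class VariableParameters:
    """Current robot variable parameters: values the user controls or that
    drift along the robot's 'life' time axis (scene, intimacy, ...)."""
    scene: str = "idle"       # e.g. eating / sleeping / exercising
    intimacy: float = 0.0     # accumulated closeness to this user


def determine_intention(signal: MultimodalSignal) -> str:
    """S102: derive the user's intention from the multi-modal signal.
    A real system would use speech/vision models; this is a toy rule."""
    return "greeting" if "hello" in signal.speech_text.lower() else "chat"


def generate_interaction_content(signal: MultimodalSignal,
                                 params: VariableParameters) -> Dict[str, str]:
    """S103: combine the signal, the intention and the variable parameters."""
    intention = determine_intention(signal)
    # The variable parameters modulate the output, so the same intention can
    # yield different expressions in different scenes or intimacy levels.
    expression = "smile" if params.intimacy > 0.5 else "neutral"
    reply = f"({params.scene}) responding to a {intention}"
    return {"expression": expression, "reply": reply}


if __name__ == "__main__":
    sig = MultimodalSignal(speech_text="Hello robot")
    print(generate_interaction_content(sig, VariableParameters(scene="morning", intimacy=0.8)))
```

The point of the sketch is that the same intention can lead to different interaction content when the variable parameters differ, which is what separates this method from a fixed question-and-answer mapping.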

Embodiment 2

[0066] As shown in Figure 2, this embodiment discloses a system for generating robot interactive content of the present invention, which is characterized in that it includes:

[0067] The acquisition module 201 is used to acquire multi-modal signals;

[0068] The intention recognition module 202 is configured to determine the user's intention according to the multi-modal signal;

[0069] The content generation module 203 is configured to generate robot interaction content according to the multi-modal signal and the user's intention in combination with current robot variable parameters.

[0070] In this way, robot interaction content can be generated more accurately on the basis of multi-modal signals such as image signals and voice signals, in combination with the robot's variable parameters, so that the robot interacts and communicates with people more accurately and anthropomorphically. Variable parameters are parameters that the user actively controls during human-computer interaction...
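As a rough illustration of how modules 201–203 might be wired together, the Python sketch below mirrors the module structure described in this embodiment; the internals of each module (the stub signal, the toy intention rule, the expression choice) are invented placeholders for whatever recognition and generation logic a real system would use.

```python
# Hypothetical sketch of how modules 201-203 from this embodiment might be
# wired together; the internals of each module are invented placeholders.
class AcquisitionModule:                       # acquisition module 201
    def acquire(self) -> dict:
        """Collect multi-modal signals (stubbed here as a fixed dict)."""
        return {"speech": "good night", "image": None}


class IntentionRecognitionModule:              # intention recognition module 202
    def recognize(self, signal: dict) -> str:
        """Map the multi-modal signal to a user intention (toy rule)."""
        return "sleep" if "night" in signal.get("speech", "") else "chat"


class ContentGenerationModule:                 # content generation module 203
    def generate(self, signal: dict, intention: str, variable_params: dict) -> dict:
        """Combine the signal, the intention and the current variable parameters."""
        sleepy = intention == "sleep" and variable_params.get("scene") == "evening"
        return {"expression": "yawn" if sleepy else "smile", "intention": intention}


class RobotInteractionSystem:
    """Wires the three modules together in the order described above."""
    def __init__(self) -> None:
        self.acquisition = AcquisitionModule()
        self.intention = IntentionRecognitionModule()
        self.generation = ContentGenerationModule()

    def step(self, variable_params: dict) -> dict:
        signal = self.acquisition.acquire()
        intention = self.intention.recognize(signal)
        return self.generation.generate(signal, intention, variable_params)


if __name__ == "__main__":
    print(RobotInteractionSystem().step({"scene": "evening"}))
```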



Abstract

The present invention provides a method for generating robot interactive content, comprising: acquiring a multi-modal signal; determining a user's intention based on the multi-modal signal; and generating robot interactive content according to the multi-modal signal and the user's intention, in combination with current robot variable parameters. The invention adds the robot's variable parameters to the generation of the robot's interactive content, so that the robot can generate interactive content according to previous variable parameters and thus behave more anthropomorphically when interacting with humans, giving the robot a human-like way of life along its life time axis. The method enhances the anthropomorphism of the generated robot interactive content, improves the human-computer interaction experience, and improves intelligence.
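A small, hypothetical sketch of the "variable parameters on a life time axis" idea follows: a scene value that drifts over the day (borrowing the eating, sleeping and exercise examples from the description) changes the expression generated for one and the same intention. The scene table and rules below are invented purely for illustration.

```python
# Hypothetical illustration of variable parameters drifting along the robot's
# daily time axis (scenes borrowed from the description: eating, sleep,
# exercise) and changing the expression generated for the same intention.
DAILY_SCENES = {7: "eating", 9: "exercise", 13: "eating", 23: "sleep"}


def scene_at(hour: int) -> str:
    """Return the life scene active at a given hour (last scene started)."""
    active = "idle"
    for start, scene in sorted(DAILY_SCENES.items()):
        if hour >= start:
            active = scene
    return active


def expression_for(intention: str, hour: int) -> str:
    """Same intention, different expression depending on the current scene."""
    scene = scene_at(hour)
    if scene == "sleep":
        return "sleepy"
    if scene == "exercise":
        return "energetic"
    return "smile" if intention == "greeting" else "neutral"


if __name__ == "__main__":
    for hour in (8, 10, 23):
        print(hour, scene_at(hour), expression_for("greeting", hour))
```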

Description

Technical field
[0001] The present invention relates to the technical field of robot interaction, and in particular to a method, system and robot for generating robot interaction content.
Background technique
[0002] Humans usually make expressions while interacting: after seeing with the eyes or hearing with the ears, the brain analyzes the input and then gives reasonable expression feedback. People also move through life scenes along the time axis of a day, such as eating, sleeping and exercising, and changes in the values of these scenes affect the feedback of human expressions. As for robots, the expression feedback a robot currently gives is mainly obtained through pre-designed methods and deep-learning training corpora. Expression feedback obtained through pre-designed programs and corpus training has the following disadvantage: the output of expressions relies on the human's textual input, that is, similar to a question-and-answer machine...


Application Information

IPC(8): G06F3/01
CPC: G06F3/01; G06F3/011; G06F3/017
Inventor: 杨新宇, 王昊奋, 邱楠
Owner: SHENZHEN GOWILD ROBOTICS CO LTD