
Robot sequence task learning method based on visual simulation

A learning method in robot technology, applied to manipulators, program-controlled manipulators, manufacturing tools, etc.; it solves problems such as limited generalization ability and the difficulty of applying trained models to new, unfamiliar scenes, and achieves the effects of strong generalization, strong practicability, and damage prevention.

Active Publication Date: 2020-05-29
BEIHANG UNIV

AI Technical Summary

Problems solved by technology

However, models trained by this method are difficult to apply to new, unfamiliar scenes, and their generalization ability is limited.



Examples


Embodiment Construction

[0070] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.

[0071] As shown in Figure 1, the present invention proposes a robot sequence task learning method based on visual imitation; the specific steps are as follows:

[0072] Step 1. The vision sensor is fixed directly above the objects, with its field of view covering the entire working plane; the robot is located at the side of the working plane, and its workspace covers the entire working plane (a pixel-to-plane mapping sketch follows these steps);

[0073] Step 2. Assuming that there are ...
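
Step 2 is truncated in this excerpt. As a rough illustration of how the fixed overhead camera of Step 1 lets detected pixel positions be mapped to physical coordinates on the working plane, here is a minimal sketch assuming a planar homography calibrated from four reference points; the point values, the use of OpenCV, and the function name pixel_to_plane are illustrative assumptions, not the patent's specification.

    import numpy as np
    import cv2

    # Assumed calibration data: pixel positions of four reference markers and
    # their measured coordinates (in metres) on the working plane.
    pixel_pts = np.array([[100, 80], [540, 80], [540, 400], [100, 400]], dtype=np.float32)
    plane_pts = np.array([[0.0, 0.0], [0.6, 0.0], [0.6, 0.45], [0.0, 0.45]], dtype=np.float32)

    # Homography from image pixels to the physical working plane.
    H, _ = cv2.findHomography(pixel_pts, plane_pts)

    def pixel_to_plane(u, v):
        """Map an image pixel (u, v) to plane coordinates (x, y) in metres."""
        pt = np.array([[[u, v]]], dtype=np.float32)  # shape (1, 1, 2), as OpenCV expects
        x, y = cv2.perspectiveTransform(pt, H)[0, 0]
        return float(x), float(y)

With exactly four correspondences cv2.findHomography solves the mapping directly; with more points it fits a least-squares estimate, which is more robust to measurement noise.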



Abstract

The invention provides a robot sequence task learning method based on visual simulation, which guides a robot to imitate and execute human actions from a video containing those actions. The method comprises the following steps: (1) identifying object types and masks from an input image using a region-based mask convolutional neural network; (2) calculating the actual physical plane coordinates (x, y) of the objects from their masks; (3) identifying the atomic actions in a target video; (4) converting the atomic action sequence and the identified object types into a one-dimensional vector; (5) inputting the one-dimensional vector into a task planner, which outputs a task description vector capable of guiding the robot; and (6) controlling the robot to imitate the sequence task in the target video by combining the task description vector with the object coordinates. Taking video and images as input, the method recognizes the objects, infers the task sequence, and guides the robot to imitate the target video; its generalization performance is strong, so the task can still be imitated under different environments or object types.
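
As a hedged illustration of steps (1), (2), and (4) above, the sketch below stands in torchvision's pretrained Mask R-CNN for the "region-based mask convolutional neural network"; the backbone, score threshold, and helper names are assumptions made for illustration, and the returned pixel centroid would still need a pixel-to-plane mapping (such as the homography sketch earlier) to yield the physical coordinates (x, y).

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor

    # Step (1): a region-based mask CNN. The patent names no backbone, so
    # torchvision's ResNet-50-FPN Mask R-CNN is an assumed stand-in.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    def detect_objects(image, score_thresh=0.7):
        """Return (class_id, soft_mask) pairs for confident detections."""
        with torch.no_grad():
            out = model([to_tensor(image)])[0]
        keep = out["scores"] > score_thresh
        return list(zip(out["labels"][keep].tolist(), out["masks"][keep]))

    def mask_centroid(mask, thresh=0.5):
        """Pixel centroid of a (1, H, W) soft mask, the starting point for
        computing the object's plane coordinates in step (2)."""
        ys, xs = torch.nonzero(mask[0] > thresh, as_tuple=True)
        return xs.float().mean().item(), ys.float().mean().item()

    def encode_task(action_ids, object_ids):
        """Step (4): flatten the recognised atomic-action sequence and object
        types into the one-dimensional vector fed to the task planner."""
        return torch.tensor(list(action_ids) + list(object_ids), dtype=torch.float32)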

Description

Technical field

[0001] The invention relates to a method for robots to imitate humans in completing various tasks based on vision sensors and video input. It belongs to the fields of robot control, deep learning, and imitation learning, and is mainly used to teach robots to imitate humans in application scenarios such as carrying, cleaning, sorting, or placing objects.

Background technique

[0002] In recent years, with the rapid development of artificial intelligence and intelligent robots, intelligent products such as robots play an increasingly important role in human life, and behind this intelligence lie complex algorithms and control methods. In the era of "Industrial Revolution 4.0" and "Made in China 2025", research on robots, robotic arms, and artificial intelligence has increasingly become the mainstream of research and innovation at universities, companies, and major laboratories around the world. Using artificial intelligence ...


Application Information

IPC(8): B25J9/16
CPC: B25J9/163, B25J9/1679, B25J9/1697
Inventor: 贾之馨, 林梦香, 陈智鑫
Owner: BEIHANG UNIV