
A visual imitation-based learning method for robot sequence tasks

A learning method and robot technology, applied in the fields of deep learning, imitation learning, and robot control; it addresses problems such as the difficulty of applying trained models to new, unfamiliar scenes and limited generalization ability, and achieves strong practicability, strong generalization, and a damage-prevention effect.

Active Publication Date: 2021-10-01
BEIHANG UNIV
Cites: 10 · Cited by: 0

AI Technical Summary

Problems solved by technology

However, the model trained by this method is difficult to apply to new, unfamiliar scenes, and its generalization ability is limited.




Embodiment Construction

[0070] The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention, without creative effort, fall within the protection scope of the present invention.

[0071] As shown in Figure 1, the present invention proposes a visual imitation-based robot sequence task learning method; the specific steps are as follows:

[0072] Step 1. The vision sensor is fixed directly above the objects, with a field of view covering the entire working plane; the robot is located at the side of the working plane, and its workspace covers the entire working plane;

[0073] Step 2. Assuming that there are n obj...
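
Step 2 is cut off above, but the Abstract below states that steps (1)-(2) identify each object's type and mask with a region-based mask convolutional neural network and then compute the object's physical plane coordinates (x, y) from the mask. The following is a minimal sketch under those assumptions: an off-the-shelf Mask R-CNN stands in for the patent's undisclosed network, and H_img2plane is a hypothetical pixel-to-plane homography obtained from an offline calibration of the overhead camera.

    import numpy as np
    import torch
    import torchvision

    # Off-the-shelf Mask R-CNN stands in for the patent's region-based mask
    # convolutional neural network; the patent's own weights are not public.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def detect_objects(image_bgr, score_thresh=0.7):
        """Return (class_id, boolean_mask) pairs for objects on the work plane."""
        rgb = np.ascontiguousarray(image_bgr[..., ::-1], dtype=np.float32) / 255.0
        img = torch.from_numpy(rgb).permute(2, 0, 1)        # HWC -> CHW
        with torch.no_grad():
            out = model([img])[0]
        keep = out["scores"] > score_thresh
        masks = out["masks"][keep, 0].numpy() > 0.5         # (N, H, W) booleans
        return list(zip(out["labels"][keep].tolist(), masks))

    def mask_to_plane_xy(mask, H_img2plane):
        """Map a mask's pixel centroid to physical (x, y) on the working plane.

        H_img2plane is a 3x3 homography from image pixels to plane coordinates,
        assumed to come from an offline calibration of the overhead camera;
        the patent only states that (x, y) is computed from the mask.
        """
        ys, xs = np.nonzero(mask)
        centroid = np.array([xs.mean(), ys.mean(), 1.0])    # homogeneous pixel
        p = H_img2plane @ centroid
        return p[0] / p[2], p[1] / p[2]

With a calibration target of known size placed on the plane, four corner correspondences passed to cv2.findHomography would yield H_img2plane; any equivalent camera calibration works.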



Abstract

A visual imitation-based robot sequence task learning method that teaches a robot to imitate human actions from videos. The steps are: (1) from the input image, use a region-based mask convolutional neural network to identify each object's type and mask; (2) compute each object's actual physical plane coordinates (x, y) from its mask; (3) identify the atomic actions in the target video; (4) convert the atomic action sequence and the recognized object types into a one-dimensional vector; (5) input the one-dimensional vector into a task planner, which outputs a task description vector that can guide the robot; (6) combining the task description vector and the object coordinates, control the robot to complete its imitation of the sequence task in the target video. The invention takes videos and images as input, recognizes objects, infers the task sequence, and guides the robot to imitate the target video. It also generalizes strongly: it can still complete imitation tasks in different environments or with different object types.
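
Steps (4)-(6) of the abstract reduce to flattening the recognized atomic actions and object types into a single one-dimensional vector, feeding it to a task planner, and executing the resulting subtasks at the coordinates from step (2). Below is a minimal sketch of that data flow; the action/object vocabularies, the placeholder planner, and the robot.execute interface are all assumptions, since the patent does not disclose the planner's architecture or the controller API.

    from dataclasses import dataclass

    # Assumed vocabularies; the patent does not enumerate its action or object
    # sets. Index 0 is reserved for padding so zero-filled slots decode to no-ops.
    ATOMIC_ACTIONS = ["idle", "pick", "place", "push"]
    OBJECT_CLASSES = ["none", "cube", "cup", "box"]

    def encode_sequence(actions, objects, max_len=10):
        """Step (4): flatten (atomic action, object type) pairs into one 1-D
        integer vector, zero-padded so the planner sees a fixed-size input."""
        vec = []
        for act, obj in zip(actions, objects):
            vec += [ATOMIC_ACTIONS.index(act), OBJECT_CLASSES.index(obj)]
        return vec + [0] * (2 * max_len - len(vec))

    @dataclass
    class Subtask:
        action: str
        obj: str
        target_xy: tuple          # physical (x, y) from step (2)

    def plan_and_execute(task_vector, coords, robot):
        """Steps (5)-(6): a real task planner (e.g. a learned sequence model)
        would map task_vector to a task description vector; this placeholder
        decodes it back into subtasks and dispatches them to a robot controller
        assumed to expose an execute(subtask) method."""
        for i in range(0, len(task_vector), 2):
            act = ATOMIC_ACTIONS[task_vector[i]]
            obj = OBJECT_CLASSES[task_vector[i + 1]]
            if act == "idle":
                continue                        # skip padding slots
            robot.execute(Subtask(act, obj, coords[obj]))

For example, encode_sequence(["pick", "place"], ["cube", "box"]) yields [1, 1, 2, 3, 0, ...], and plan_and_execute would issue a pick at the cube's coordinates followed by a place at the box's.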

Description

Technical field

[0001] The invention relates to a method by which a robot imitates humans to complete various tasks based on visual sensors and video input. It belongs to the fields of robot control, deep learning, and imitation learning, and is mainly used in application scenarios such as teaching a robot to imitate humans in handling, cleaning, sorting, or placing objects.

Background technique

[0002] In recent years, with the rapid development of artificial intelligence and intelligent robots, intelligent products such as robots have played an increasingly important role in human life; behind this intelligence are complex algorithms and control methods. Against the background of "Industrial Revolution 4.0" and "Made in China 2025", research on robots, robotic arms, and artificial intelligence has become a mainstream focus of research and innovation at universities, enterprises, and laboratories in various countries. Using artificial intelli...


Application Information

Patent Type & Authority Patents(China)
IPC IPC(8): B25J9/16
CPCB25J9/163B25J9/1679B25J9/1697
Inventor 贾之馨林梦香陈智鑫
Owner BEIHANG UNIV