
Virtual sample deep learning-based robot target identification and pose reconstruction method

A method combining deep learning with virtual sample technology, applied in character and pattern recognition, instruments, computer components, etc. It addresses problems such as difficulty in achieving accurate matching, feature occlusion, and the inability to meet the real-time requirements of industrial control, and improves the flexibility of the algorithm.

Active Publication Date: 2017-06-13
SHANGHAI GOLYTEC AUTOMATION CO LTD

AI Technical Summary

Problems solved by technology

[0002] At present, the visual perception of industrial robots mainly uses the key contour features on certain planes of the target workpiece to reconstruct the target pose and plan the operation path. The disadvantages include: when the viewing-angle deviation of the object is too large, features are occluded and matching becomes difficult; and the software is not flexible enough, since different contour features and corresponding pose-inversion formulas must be defined in advance for each operation target.
[0004] However, the above method lacks initiative in viewing-angle selection: if the current camera pose captures little contour information of the target, accurate matching is difficult to achieve. The traditional template-matching method also cannot adapt to the contour noise introduced by complex backgrounds in real industrial scenes. Moreover, the template search and nonlinear optimization process take considerable time and cannot meet the real-time requirements of industrial control.



Embodiment Construction

[0028] The present invention will be described in detail below in conjunction with specific embodiments. The following examples will help those skilled in the art to further understand the present invention, but do not limit the present invention in any form. It should be noted that those skilled in the art can make several changes and improvements without departing from the concept of the present invention. These all belong to the protection scope of the present invention.

[0029] The robot target recognition and pose reconstruction method based on virtual sample deep learning provided by the present invention comprises the following steps:

[0030] Target area detection step: use a CNN region detector to extract the region of the operation target in the camera image, so as to preliminarily determine the relative position of the target and the camera at the robot end;
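The patent does not disclose the detector architecture, but the step above can be illustrated with a minimal sketch: assuming the CNN region detector outputs a bounding box around the workpiece, the box centre can be back-projected through assumed pinhole intrinsics to obtain the target's direction relative to the end camera (its relative position up to an unknown depth). The bounding box and intrinsics below are hypothetical values, not from the patent.

```python
import numpy as np

# Hypothetical output of the CNN region detector: a bounding box
# (x_min, y_min, x_max, y_max) around the workpiece in the image.
bbox = (220.0, 140.0, 420.0, 340.0)

# Assumed pinhole intrinsics of the robot end camera (illustrative).
fx, fy = 800.0, 800.0   # focal lengths in pixels
cx, cy = 320.0, 240.0   # principal point

# Bounding-box centre in pixel coordinates.
u = 0.5 * (bbox[0] + bbox[2])
v = 0.5 * (bbox[1] + bbox[3])

# Back-project to a unit ray in the camera frame: the direction of
# the target relative to the end camera.
ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
ray /= np.linalg.norm(ray)
print(ray)
```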

[0031] Relative attitude estimation step: use a CNN pose classifier to estimate the observation-angle deviation between the current viewing angle of the robot end camera and the optimal viewing angle for accurate pose solving;
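Since the pose estimator is described as a classifier, a plausible reading is that viewing angles are discretized into bins and the network predicts the current bin; the deviation from the optimal bin then drives the correction motion. The sketch below assumes hypothetical logits, an 8-bin discretization, and bin 0 as the optimal view, none of which are specified in the patent.

```python
import numpy as np

# Hypothetical logits from the CNN pose classifier over discrete
# viewing-angle bins (here 8 bins of 45 degrees around the target).
logits = np.array([0.1, 0.3, 2.5, 0.2, -0.4, 0.0, 0.1, -0.2])
bin_step_deg = 45.0

# Softmax to class probabilities, then take the most likely bin.
p = np.exp(logits - logits.max())
p /= p.sum()
current_bin = int(np.argmax(p))

# Assume bin 0 is the optimal viewing angle for accurate pose solving;
# the signed deviation (wrapped to [-180, 180)) tells the controller
# how far to move the end camera.
optimal_bin = 0
deviation_deg = ((current_bin - optimal_bin) * bin_step_deg + 180.0) % 360.0 - 180.0
print(current_bin, deviation_deg)
```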



Abstract

The invention provides a virtual sample deep learning-based robot target identification and pose reconstruction method. The method comprises the steps of: extracting the region of the operation target in a camera image with a CNN region detector, preliminarily determining the relative position of the target and the robot end camera; estimating, with a CNN pose classifier, the observation-angle deviation between the current viewing angle of the robot end camera and the optimal viewing angle for accurate pose solving; controlling the robot motion with a multi-observation-view-angle correction method so that the end camera moves to the optimal viewing angle; and at that optimal viewing angle, realizing accurate calculation of the target pose through virtual-real matching of contour features and pose inverse solving. The method alleviates the massive sample demand of deep convolutional neural networks, avoids the feature occlusion and matching difficulty caused by excessive contour-matching viewing-angle deviation, and improves both the initiative of robot visual perception and the algorithmic flexibility of target pose reconstruction.
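The multi-view correction step in the abstract can be sketched as a corrective rotation: once the classifier has estimated the observation-angle deviation, the robot rotates the end camera by the opposite angle so it lands on the optimal viewing angle. The single-axis (yaw) parameterization and the numeric deviation below are illustrative assumptions, not the patent's control law.

```python
import numpy as np

def yaw_rotation(deg):
    """Rotation about the camera's vertical (y) axis by `deg` degrees."""
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

# Estimated observation-angle deviation from the pose classifier
# (illustrative value): the end camera is 90 degrees away from the
# optimal viewing angle.
deviation_deg = 90.0

# Multi-view correction: command a rotation by the negative deviation
# so the end camera arrives at the optimal viewing angle.
R_current = yaw_rotation(deviation_deg)    # current camera orientation
R_correct = yaw_rotation(-deviation_deg)   # corrective motion
R_final = R_correct @ R_current            # back at the optimal view
print(np.allclose(R_final, np.eye(3)))
```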

Description

Technical field

[0001] The present invention relates to the fields of machine vision and robot control, and in particular to a robot target recognition and pose reconstruction method based on deep learning with virtual samples.

Background technique

[0002] At present, the visual perception of industrial robots mainly uses the key contour features on certain planes of the target workpiece to reconstruct the target pose and plan the operation path. The disadvantages include: when the viewing-angle deviation of the object is too large, features are occluded and matching becomes difficult; and the software is not flexible enough, since different contour features and corresponding pose-inversion formulas must be defined in advance for each operation target.

[0003] After searching: Wang Zhongren et al., in "CAD model-based random workpiece visual recognition and positioning method", proposed using a CAD model for template training, and then using the template outlin...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/62, G06T7/33, G06T17/00
CPC: G06T17/00, G06T2207/10004, G06T2207/20084, G06T2207/20081, G06T2207/30244, G06F18/22, G06F18/214, G06F18/24
Inventors: 谷朝臣, 章良君, 吴开杰, 关新平
Owner SHANGHAI GOLYTEC AUTOMATION CO LTD