
Visual recognition and positioning method for robot intelligent capture application

A robot-intelligence and visual-recognition technology, applied in the field of intelligent robots, that addresses problems such as poor robustness, heavy computation, and slow detection speed.

Active Publication Date: 2018-06-15

AI Technical Summary

Problems solved by technology

This method must generate thousands of candidate regions for each image and feed every candidate region into a convolutional neural network for detection. The computational load is therefore heavy and detection is slow, making the method unsuitable for applications with strict real-time requirements.
Moreover, the method only yields candidate grasping regions for the target and cannot determine the target's three-dimensional pose, so it is difficult to plan the best grasping strategy for randomly placed targets according to their differing poses.
[0006] In short, existing robot visual grasping technology typically splits detection into separate identification and positioning steps. The overall level of intelligence is low, robustness is poor, and the trade-off between detection accuracy and detection speed makes it difficult to reach application standards.
At the same time, these detection methods are mostly used under conditions where items are regularly placed and a single grasping strategy suffices. For randomly placed goods, pose information is not detected well, which hinders grasp planning and lowers the grasping success rate.
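For context, the per-proposal pattern criticized above can be made concrete with a short sketch. This is an illustrative toy in Python, not code from the patent; the function names, the dummy classifier, and the proposal count noted in the comments are assumptions.

```python
# Illustrative toy of the per-proposal detection pattern described above (R-CNN style).
# Names, the dummy classifier, and the proposal count are assumptions, not patent code.
import numpy as np

def detect_per_region(image, proposals, classify):
    """Classify every candidate region separately: the slow pattern criticized above."""
    results = []
    for (x0, y0, x1, y1) in proposals:           # often thousands of proposals per image
        crop = image[y0:y1, x0:x1]               # each crop costs its own forward pass
        results.append(((x0, y0, x1, y1), classify(crop)))
    return results

# Toy usage: a dummy classifier stands in for a CNN forward pass.
image = np.zeros((480, 640, 3), dtype=np.uint8)
proposals = [(10, 10, 60, 60), (100, 120, 220, 240)]
detections = detect_per_region(image, proposals, classify=lambda crop: "unknown")
```

The cost of this scheme grows linearly with the number of proposals, which is why it struggles to meet real-time requirements.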



Embodiment Construction

[0034] The present invention is described in detail below in conjunction with the accompanying drawings and specific embodiments; the illustrative embodiments and descriptions serve only to explain the invention and are not intended to limit it.

[0035] First, the present invention constructs and trains a deep convolutional neural network for the robot's visual recognition and positioning task. This involves three steps: building a deep-learning data set, building the deep convolutional neural network, and training the network offline, detailed as follows:

[0036] (A) Deep-learning data set construction: collect sample images of the relevant scene according to the detection objects and task requirements, and manually label the sample images with the help of open-source tools. The label information includes the category of each target object in the scene and its co...
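As an illustration of the labeling step above, the sketch below parses one Pascal-VOC-style XML annotation of the kind produced by open-source tools such as labelImg into (category, bounding box) pairs. The tag layout and the file name are common-convention assumptions, not details taken from the patent.

```python
# Minimal sketch, assuming Pascal-VOC-style XML annotations such as those written
# by the open-source tool labelImg; tag names and the file name are assumptions.
import xml.etree.ElementTree as ET

def load_voc_annotation(xml_path):
    """Return (category, (xmin, ymin, xmax, ymax)) pairs for one labeled sample image."""
    root = ET.parse(xml_path).getroot()
    labels = []
    for obj in root.findall("object"):
        category = obj.find("name").text               # target object category
        box = obj.find("bndbox")                       # axis-aligned bounding box
        coords = tuple(int(box.find(tag).text)
                       for tag in ("xmin", "ymin", "xmax", "ymax"))
        labels.append((category, coords))
    return labels

# Example (hypothetical file name): labels = load_voc_annotation("scene_0001.xml")
```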


Abstract

The invention relates to a visual recognition and positioning method for robot intelligent capture (grasping) applications. According to the method, an RGB-D scene image is collected, a deep convolutional neural network trained with supervision recognizes the category of a target contained in the color image and its corresponding position region, the pose of the target is analyzed in combination with the depth image, and the pose information needed by the controller is obtained through coordinate transformation, completing visual recognition and positioning. With this method, the dual functions of recognition and positioning are achieved with a single visual sensor, which simplifies the existing target detection process and saves application cost. Meanwhile, because the deep convolutional neural network learns image features, the method is highly robust to many kinds of environmental interference, such as random target placement, viewing-angle changes, and illumination or background interference, improving recognition and positioning accuracy under complicated working conditions. In addition, the positioning method obtains exact pose information beyond the spatial position distribution of objects, which facilitates strategy planning for intelligent grasping.
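To make the positioning stage of this pipeline concrete, here is a minimal sketch under common assumptions: a detector (not shown) has already returned a target's pixel region, the pinhole model back-projects its center using the aligned depth image and camera intrinsics, and a hand-eye calibration matrix maps the camera-frame point into the robot base frame. The intrinsics, the transform values, and all names are illustrative placeholders, not values from the patent.

```python
# Minimal sketch of the positioning stage, assuming a calibrated RGB-D camera.
# The intrinsics, the hand-eye matrix, and all names are illustrative assumptions.
import numpy as np

def pixel_to_camera(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a pixel with metric depth into a camera-frame 3D point (pinhole model)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

def camera_to_base(p_cam, T_base_cam):
    """Map a camera-frame point into the robot base frame via a 4x4 homogeneous transform."""
    p_h = np.append(p_cam, 1.0)                   # homogeneous coordinates
    return (T_base_cam @ p_h)[:3]

# Placeholder numbers: the detector returned a box; use its center pixel.
u, v = (320 + 400) // 2, (180 + 300) // 2         # center of the detected region
depth_m = 0.85                                    # depth image value at (u, v), meters
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0       # assumed camera intrinsics
T_base_cam = np.eye(4)                            # assumed hand-eye calibration result
T_base_cam[:3, 3] = [0.40, 0.00, 0.60]            # assumed camera offset from base, meters

p_cam = pixel_to_camera(u, v, depth_m, fx, fy, cx, cy)
p_base = camera_to_base(p_cam, T_base_cam)
print("target position in robot base frame:", p_base)
```

In a real deployment T_base_cam would come from a hand-eye calibration procedure, and the full pose (orientation as well as position) would be estimated from the depth data around the target region, as the abstract describes.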

Description

[Technical Field]

[0001] The invention belongs to the field of intelligent robots and in particular relates to a visual recognition and positioning method for intelligent grasping applications of robots.

[Background Art]

[0002] In an intelligent logistics warehousing system, a mobile operating robot with intelligent grasping ability is an important carrier for efficient unmanned operation. According to order requirements, the robot navigates autonomously through the warehouse, grabs target products from the shelves, and realizes unmanned material sorting. In the grabbing process, correct identification and precise positioning of the target commodity by the robot's vision system is a prerequisite for a successful grasp; only by providing accurate visual perception signals to the robot's motion control in time can completion of the grasping task be guaranteed.

[0003] Most of the visual recognition schemes for robot grasp...


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06T7/70G06T7/80G06N3/04G06N3/08
CPCG06N3/084G06T7/70G06T7/80G06N3/045
Inventor 丁亮程栋梁周如意刘振王亚运蒋鸣鹤于振中
Owner 合肥哈工慧拣智能科技有限公司