
Object pose estimation method based on self-supervised learning and template matching

A technology combining template matching and self-supervised learning, applied in the field of computer vision. It addresses problems such as the scarcity of samples with ground-truth six-degree-of-freedom poses, the difficulty of sample labeling, and the resulting difficulty of applying and promoting deep-learning methods; in doing so it ensures high efficiency, avoids insufficient samples, and saves labeling effort and cost.

Active Publication Date: 2020-03-27
TONGJI UNIV
Cites 3 · Cited by 10

AI Technical Summary

Problems solved by technology

Deep-learning methods require a large number of samples annotated with ground-truth six-degree-of-freedom poses, and their success depends to a considerable extent on the number of samples and the range those samples cover. Two-dimensional deep-learning object detection succeeded because the Internet and big data supply abundant samples and two-dimensional labeling is comparatively easy; samples with six-degree-of-freedom poses, by contrast, are quite scarce and quite difficult to label, so deep-learning-based pose estimation methods are hard to apply and promote.




Embodiment Construction

[0061] The present invention will be described in detail below in conjunction with the accompanying drawings and specific embodiments. This embodiment is carried out on the premise of the technical solution of the present invention, and a detailed implementation and specific operation process are given, but the protection scope of the present invention is not limited to the following embodiments.

[0062] An object pose estimation method based on self-supervised learning and template matching, comprising the following steps (an illustrative code sketch follows these steps):

[0063] S1: Use a calibrated consumer-grade depth camera to collect a color image and a depth image of the target object, and crop both images with a convolutional neural network to obtain the corresponding color candidate image and depth candidate image;

[0064] S2: Segment the color candidate image and the depth candidate image with a trained self-supervised encoder-decoder (codec) to obtain a color segmentation image and a depth segmentation image;

[0065] S3: Match the color segmentation image and the depth segmentation image against a template library to obtain a matching pose;

[0066] S4: Refine the matching pose to obtain the final pose of the target object.
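The following is a minimal sketch of how steps S1 and S3 fit together, assuming the detector's bounding box and the template features are already available. The function names, the cosine-similarity nearest-neighbour rule, and the toy data are illustrative assumptions, not the patent's disclosed implementation.

```python
# Sketch of the S1 crop and the S3 template lookup. crop_candidates stands
# in for the CNN detector's crop; match_template is a generic
# nearest-neighbour search by cosine similarity (an assumption, since the
# matching metric is not published here).
import numpy as np

def crop_candidates(color, depth, bbox):
    """S1: cut the detected bounding box (x, y, w, h) out of both images."""
    x, y, w, h = bbox
    return color[y:y + h, x:x + w], depth[y:y + h, x:x + w]

def match_template(feature, template_features, template_poses):
    """S3: return the pose of the template whose feature is most similar."""
    f = feature / np.linalg.norm(feature)
    t = template_features / np.linalg.norm(template_features, axis=1,
                                           keepdims=True)
    best = int(np.argmax(t @ f))          # highest cosine similarity
    return template_poses[best]

# Toy usage: 500 templates with 128-dim features; each pose is tagged by
# its x-translation so the retrieved match is easy to verify.
rng = np.random.default_rng(0)
feats = rng.standard_normal((500, 128))
poses = np.tile(np.eye(4), (500, 1, 1))
poses[:, 0, 3] = np.arange(500)
query = feats[42] + 0.01 * rng.standard_normal(128)   # near template 42
assert match_template(query, feats, poses)[0, 3] == 42.0
```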



Abstract

The invention relates to an object pose estimation method based on self-supervised learning and template matching, comprising the steps: S1, collecting a color image and a depth image of a target object, and obtaining a corresponding color candidate image and depth candidate image by cropping; S2, segmenting the color candidate image and the depth candidate image with a trained self-supervised codec equipped with a noise generator to obtain a color segmentation image and a depth segmentation image; S3, matching the color segmentation image and the depth segmentation image against a template library to obtain a matching pose; S4, refining the matching pose to obtain the target object pose. Color sample images and depth sample images of the three-dimensional model of the target object are collected by a spherical multi-scale method; the self-supervised codec is trained with the color sample images; features carrying pose information are extracted from the color and depth sample images at multiple scales, and the template library is constructed from these features. Compared with the prior art, the method has the advantages of good robustness, low cost, and no need for label information.
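As a rough illustration of the spherical multi-scale sampling mentioned in the abstract, the sketch below places virtual camera viewpoints on spheres of several radii around the object model and records one pose per viewpoint. The Fibonacci-sphere spacing, the particular radii, and the look-at construction are assumptions; the abstract states only that sample views are collected on a sphere at multiple scales.

```python
# Sketch of viewpoint generation for a spherical multi-scale template
# library. Fibonacci-sphere spacing and the example radii are assumptions.
import numpy as np

def fibonacci_sphere(n):
    """n roughly uniform unit vectors on the sphere (golden-angle spiral)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def look_at(eye):
    """Camera-to-world rotation whose optical axis points at the origin."""
    z = -eye / np.linalg.norm(eye)                 # forward: toward object
    x = np.cross(np.array([0.0, 0.0, 1.0]), z)     # right: world-up x forward
    if np.linalg.norm(x) < 1e-8:                   # viewpoint on a pole
        x = np.array([1.0, 0.0, 0.0])
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z], axis=1)             # columns: camera axes

def template_viewpoints(radii=(0.4, 0.6, 0.8), views_per_scale=256):
    """One 4x4 camera-to-world pose per (scale, viewpoint) pair."""
    poses = []
    for radius in radii:
        for direction in fibonacci_sphere(views_per_scale):
            T = np.eye(4)
            T[:3, :3] = look_at(radius * direction)
            T[:3, 3] = radius * direction
            poses.append(T)
    return np.stack(poses)

print(template_viewpoints().shape)                 # (768, 4, 4)
```

Each template image would then be rendered from one of these poses and its extracted feature stored alongside the pose, so that step S3 can return a pose for the best-matching feature.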

Description

Technical field

[0001] The invention relates to the field of computer vision, and in particular to an object pose estimation method based on self-supervised learning and template matching.

Background technique

[0002] Object pose estimation technology uses 3D vision to determine the 3D translation and 3D rotation parameters of a target object relative to the camera, and thereby estimates the object's pose. Object pose estimation is a key problem in robot environment perception, grasping, and dexterous manipulation, and research on this technology is of great significance for advancing service robots, industrial robot automation, and VR and AR technologies.

[0003] At present, object pose estimation methods are mainly based on laser point clouds, template matching, and deep learning. Each of these technologies has certain deficiencies, specifically:

[0004] Laser point cloud-based method: collect high-precision point cloud data with high-prec...
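For readers new to the notation in paragraph [0002]: a six-degree-of-freedom pose couples a 3D rotation with a 3D translation, conventionally packed into a 4x4 homogeneous transform that maps object-frame points into the camera frame. The snippet below is standard rigid-body algebra for illustration, not code from the patent.

```python
# Build a 4x4 homogeneous pose from a rotation R and a translation t, then
# apply it to a point. Plain rigid-body algebra, not patent-specific code.
import numpy as np

def pose_matrix(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Example: object rotated 90 degrees about the camera z-axis and placed
# 0.5 m in front of the camera.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T = pose_matrix(Rz, np.array([0.0, 0.0, 0.5]))
p_object = np.array([0.1, 0.0, 0.0, 1.0])   # point in the object frame
print(T @ p_object)                         # [0.  0.1 0.5 1. ]
```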


Application Information

IPC(8): G06T7/73; G06T7/593; G06N3/04
CPC: G06T7/73; G06T7/593; G06T2207/10024; G06T2207/10028; G06T2207/20081; G06N3/045
Inventor: 陈启军 (Chen Qijun), 王德明 (Wang Deming), 颜熠 (Yan Yi), 周光亮 (Zhou Guangliang), 刘成菊 (Liu Chengju)
Owner: TONGJI UNIV