
Target attitude estimation method and system fusing planar and three-dimensional information, and medium

A technology combining three-dimensional information with target pose estimation, applied in the fields of image recognition and target pose recognition in video images. It addresses the problems that scene point cloud noise is difficult to handle, processing speed is slow, and template preparation requires a large workload, with the effects of reducing the number of templates, improving processing speed, and reducing the preparation burden.

Pending Publication Date: 2021-11-30
RENMIN UNIVERSITY OF CHINA
Cites: 7 | Cited by: 2

AI Technical Summary

Problems solved by technology

Therefore, target matching algorithms that use only planar images often need templates prepared from multiple viewing angles of the target object, which requires a large amount of preparatory work and still gives poor accuracy.
Correspondingly, descriptor methods that rely solely on three-dimensional point cloud data from a 3D sensor obtain relatively complete feature information, but their processing speed is slow and they struggle to cope with the heavier noise present in scene point cloud data.



Examples


Embodiment 1

[0024] This embodiment discloses a method for estimating a target pose by fusing plane and three-dimensional information. As shown in Figure 1, the method includes:

[0025] Obtain the planar grayscale image, depth map and CAD model of the target scene.

[0026] A complete color image is composed of three channels: red, green, and blue. Viewed individually, each channel appears as a grayscale image in which different gray levels represent the proportion of that color at each pixel. Such an image is usually displayed as shades of gray from the darkest black to the brightest white, and the color image can be converted into a planar grayscale image by means of floating-point arithmetic, integer arithmetic, shift arithmetic, or average arithmetic.
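The four conversions named above are standard RGB-to-grayscale formulas rather than anything specific to this patent. A minimal NumPy sketch of them, assuming an 8-bit RGB input, might look like this:

    import numpy as np

    def rgb_to_gray(rgb: np.ndarray, method: str = "float") -> np.ndarray:
        """Convert an H x W x 3 uint8 RGB image to a planar grayscale image
        using one of the four conversions named in the text."""
        r = rgb[..., 0].astype(np.uint32)
        g = rgb[..., 1].astype(np.uint32)
        b = rgb[..., 2].astype(np.uint32)

        if method == "float":      # weighted sum with floating-point coefficients
            gray = 0.299 * r + 0.587 * g + 0.114 * b
        elif method == "integer":  # same weights scaled to integers
            gray = (299 * r + 587 * g + 114 * b) // 1000
        elif method == "shift":    # weights scaled to powers of two, then bit-shifted
            gray = (77 * r + 151 * g + 28 * b) >> 8
        elif method == "average":  # unweighted mean of the three channels
            gray = (r + g + b) // 3
        else:
            raise ValueError(f"unknown method: {method}")

        return gray.astype(np.uint8)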

[0027] A depth map, also called a range image, records the distance (depth) from the image acquisition device to each point in the scene, and thus reflects the surface geometry of the scene.
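Given camera intrinsics, each depth-map pixel can be back-projected into a 3D point, which is how the depth map is later mapped into a point cloud. The patent excerpt does not give a camera model, so the pinhole parameters (fx, fy, cx, cy) in this NumPy sketch are assumptions for illustration:

    import numpy as np

    def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                        cx: float, cy: float) -> np.ndarray:
        """Back-project an H x W depth map (in metres) into an N x 3 point cloud
        using a pinhole camera model; zero-depth pixels are dropped."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]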

[0028] Segment the target in the planar grayscale image ...
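The segmentation paragraph is truncated in this excerpt, so the patent's actual segmentation method is not shown here. Purely as a hypothetical illustration, a simple Otsu-threshold-and-contour segmentation whose mask is then transferred to a pixel-aligned depth map could look like the following; the real method (the system description below mentions a model training module) is likely learned rather than threshold-based:

    import cv2
    import numpy as np

    def segment_and_map_to_depth(gray: np.ndarray, depth: np.ndarray):
        """Illustrative only: threshold the 8-bit grayscale image, keep the
        largest contour as the target mask, and apply that mask to a depth
        map that is assumed to be pixel-aligned with the grayscale image."""
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        mask = np.zeros_like(gray)
        if contours:
            largest = max(contours, key=cv2.contourArea)
            cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)
        target_depth = np.where(mask > 0, depth, 0)  # depth pixels of the target only
        return mask, target_depth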

Embodiment 2

[0039] Based on the same inventive concept, this embodiment discloses a target pose estimation system that fuses plane and three-dimensional information, including:

[0040] An image acquisition module, configured to acquire a plane grayscale image, a depth map and a CAD model under the target scene;

[0041] The model training module is used to segment the planar grayscale image and map the result to the depth map;

[0042] The point cloud positioning module is used to map the depth map into the three-dimensional point cloud, and locate the target in the three-dimensional point cloud;

[0043] The pose estimation module is used to determine the CAD model corresponding to the target according to the boundary region of the target object, and to map the positioning result in the three-dimensional point cloud onto that CAD model, thereby estimating the pose of the target.
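The excerpt does not specify how the mapping onto the CAD model is computed. Purely as an assumption for illustration, one common way to realize this step is to sample the CAD model into a point cloud and register the located scene points to it, for example with point-to-point ICP via Open3D. The sketch below shows how the point cloud positioning and pose estimation modules could compose under that assumption; the function name, parameters, and registration algorithm are all hypothetical, not taken from the patent:

    import numpy as np
    import open3d as o3d

    def estimate_pose(scene_points: np.ndarray, cad_mesh_path: str,
                      voxel: float = 0.005) -> np.ndarray:
        """Register the target's scene points to a point cloud sampled from its
        CAD model and return a 4x4 transform (CAD frame -> scene frame) as the pose."""
        scene = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scene_points))
        cad = o3d.io.read_triangle_mesh(cad_mesh_path).sample_points_uniformly(10000)

        scene = scene.voxel_down_sample(voxel)
        cad = cad.voxel_down_sample(voxel)

        # Note: starting ICP from the identity only works when the initial
        # alignment is already rough-correct; in practice a coarse global
        # registration would normally precede this refinement step.
        result = o3d.pipelines.registration.registration_icp(
            cad, scene,
            max_correspondence_distance=10 * voxel,
            init=np.eye(4),
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return result.transformation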

Embodiment 3

[0045] Based on the same inventive concept, this embodiment discloses a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements the steps of any of the above target pose estimation methods fusing plane and three-dimensional information.

[0046] Those skilled in the art should understand that the embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.



Abstract

The invention belongs to the technical field of image recognition and relates to a target attitude estimation method, system, and medium fusing plane and three-dimensional information. The method comprises the steps of: obtaining a plane grayscale image, a depth map, and a CAD model of a target scene; segmenting the target in the plane grayscale image and mapping the result onto the depth map; mapping the depth map into a three-dimensional point cloud and locating the target in the point cloud; and mapping the positioning result in the three-dimensional point cloud onto the CAD model corresponding to the target, thereby estimating the target's attitude. The method fuses plane and three-dimensional information and can estimate the target attitude from multiple angles.

Description

Technical field

[0001] The invention relates to a method, system, and medium for estimating a target pose by fusing plane and three-dimensional information. It belongs to the technical field of image recognition, in particular to target pose recognition in video images.

Background technique

[0002] With the popularity of two-dimensional image acquisition equipment, acquiring plane images has become increasingly convenient. Edge information of an object can be quickly computed from color-gradient changes in a plane image, which helps separate the target from the surrounding scene. At the same time, as 3D sensors become more widely used, it is increasingly common to use 3D point clouds to solve problems in the target scene. As an important information carrier for product design and manufacturing in industry, the professional and exquisite three-dim...


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06T7/73G06K9/46G06K9/62
CPCG06T7/73G06T2207/20081G06T2207/20084G06T2207/10028G06F18/213G06F18/24G06F18/25
Inventor 何军孙琪蒋思为何钰霖
Owner RENMIN UNIVERSITY OF CHINA