
Monocular image-oriented three-dimensional object detection method based on three-dimensional reconstruction

A 3D object detection and 3D reconstruction technology, applied in the fields of image processing and computer vision, that improves detection performance, offers good scalability, and realizes 3D detection tasks.

Inactive Publication Date: 2020-01-14
DALIAN UNIV OF TECH
Cites: 6 · Cited by: 44

AI Technical Summary

Problems solved by technology

[0006] The present invention aims to overcome the deficiencies of the prior art by providing a more accurate 3D object detection method based on a monocular camera, solving the problem of reconstructing 3D space, and extracting 3D semantics well. An independent module converts the input data from the two-dimensional image plane to a three-dimensional point cloud space to obtain a better input representation. To improve the recognition ability of the point cloud, the invention proposes a multi-modal feature fusion module that embeds complementary RGB features into the generated point cloud representation. A PointNet network then performs 3D detection to obtain the 3D position, size, and orientation of the object.
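The multi-modal fusion idea described above can be sketched minimally as attaching, to each generated 3D point, the RGB information sampled at the pixel it came from. This is an illustrative sketch, not the patent's implementation; the function name, the list-based image representation, and the per-point (u, v) bookkeeping are all assumptions made for the example.

```python
# Hedged sketch of multi-modal feature fusion: augment each 3D point
# (x, y, z) with the RGB value at its source pixel, yielding a 6-D
# point representation. All names and data layouts are illustrative.

def fuse_rgb_into_points(points, pixels, image):
    """points: list of (x, y, z) 3D points;
    pixels: matching list of integer (u, v) source pixels;
    image:  row-major grid, image[v][u] -> (r, g, b).
    Returns fused points (x, y, z, r, g, b)."""
    fused = []
    for (x, y, z), (u, v) in zip(points, pixels):
        r, g, b = image[v][u]          # sample RGB at the source pixel
        fused.append((x, y, z, r, g, b))
    return fused

# Tiny example: one point, one red pixel.
img = [[(255, 0, 0)]]
print(fuse_rgb_into_points([(1.0, 2.0, 5.0)], [(0, 0)], img))
# [(1.0, 2.0, 5.0, 255, 0, 0)]
```

In the patent's richer variant, learned RGB features of the ROI (not just raw pixel values) are embedded into the point representation; the concatenation pattern stays the same.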




Embodiment Construction

[0050] The specific embodiments of the present invention are further described below in conjunction with the accompanying drawings and the technical solution.

[0051] The present invention uses images acquired by a monocular camera as input data. On this basis, it uses a two-dimensional detector and a sparse depth map inferred by a CNN depth-prediction method to recover depth information and build three-dimensional point cloud data. The implementation process of the whole method is shown in Figure 1 and includes the following steps:

[0052] 1) First, two CNN networks convolve the RGB image to obtain the approximate position and depth information of the objects.

[0053] 1-1) Two-dimensional detector: use the CNN two-dimensional detector to detect objects in the RGB image and output, for each detected object, its category score (Class Score) and the coordinates of its two-dim...
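The step of turning a predicted depth map into three-dimensional point cloud data can be sketched with the standard pinhole back-projection. This is a minimal illustration, not the patent's code; the KITTI-like intrinsics (fx, fy, cx, cy) in the example are assumed values for demonstration only.

```python
# Hedged sketch: back-project a sparse depth map into a 3D point cloud
# using the pinhole camera model:
#     x = (u - cx) * z / fx,   y = (v - cy) * z / fy
# The dict-based sparse-depth representation and the intrinsic values
# below are illustrative assumptions.

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: dict {(u, v): z} of pixel coordinates to predicted depth.
    Returns a list of (x, y, z) points in the camera frame; pixels with
    non-positive depth (missing predictions) are skipped."""
    points = []
    for (u, v), z in depth.items():
        if z <= 0:                     # invalid / missing depth
            continue
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points.append((x, y, z))
    return points

# Example: a pixel at the principal point lands on the optical axis.
cloud = depth_to_point_cloud({(621.0, 187.5): 10.0},
                             721.5, 721.5, 621.0, 187.5)
print(cloud)  # [(0.0, 0.0, 10.0)]
```

Restricting the back-projection to pixels inside the detector's 2D boxes yields the per-object point clouds that the later PointNet stage consumes.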



Abstract

The invention discloses a monocular image-oriented three-dimensional object detection method based on three-dimensional reconstruction, belonging to the fields of image processing and computer vision. The method comprises the following steps: first, an independent module converts the input data from the two-dimensional image plane into a three-dimensional point cloud space to obtain a better input representation; then three-dimensional detection is performed using a PointNet network as the backbone network to obtain the three-dimensional position, size, and orientation of the object. To improve the recognition capability of the point cloud, the invention provides a multi-modal feature fusion module that supplements the generated point cloud representation with the RGB information of individual points and the RGB features of the ROI. Compared with working on a two-dimensional image, deriving the three-dimensional bounding box from the three-dimensional scene is more efficient; compared with similar monocular-camera-based three-dimensional object detection methods, the method provided by the invention is more efficient.
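The PointNet backbone the abstract refers to rests on one key property: a shared per-point transform followed by a symmetric (order-invariant) max-pool. A toy sketch of that aggregation, with made-up linear weights standing in for the learned MLP, looks like this; it illustrates the idea only and is not the patent's network.

```python
# Hedged sketch of the PointNet aggregation pattern: a shared per-point
# feature map, then channel-wise max-pooling, so the global feature is
# invariant to the ordering of the input points. Weights are toy values.

def shared_mlp(point):
    # Stand-in for the learned per-point MLP: a fixed linear map
    # from (x, y, z) to two feature channels.
    x, y, z = point
    return (x + 2 * y, 3 * z - x)

def pointnet_global_feature(points):
    """Apply the shared transform to every point, then max-pool each
    feature channel across all points (the symmetric aggregation)."""
    feats = [shared_mlp(p) for p in points]
    return tuple(max(f[i] for f in feats) for i in range(len(feats[0])))

cloud = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
print(pointnet_global_feature(cloud))  # (2.0, 3.0)
# Reordering the points leaves the global feature unchanged:
assert pointnet_global_feature(cloud) == pointnet_global_feature(cloud[::-1])
```

In the actual method, regression heads on top of this global feature predict the 3D position, size, and orientation of the object.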

Description

Technical Field

[0001] The invention belongs to the fields of image processing and computer vision and relates to three-dimensional object detection based on monocular images in outdoor scenes. It specifically relates to a three-dimensional object detection method based on three-dimensional reconstruction that takes a monocular image as input and outputs the real three-dimensional coordinates, size, and orientation of objects of interest (such as vehicles and pedestrians) in the image.

Background Technique

[0002] In recent years, with the development of deep learning and computer vision, a large number of two-dimensional object detection algorithms have been proposed and widely applied in various vision products. However, for applications such as autonomous driving, mobile robots, and virtual reality, two-dimensional detection technology falls far short of actual needs. In order to provide more accurate target position and geometric information, ...


Application Information

Patent Type & Authority: Applications (China)
IPC (8): G06K9/32, G06K9/46, G06T7/00, G06T7/62
CPC: G06T7/0002, G06T7/62, G06T2207/10028, G06T2207/20081, G06T2207/20084, G06V10/25, G06V10/44, G06V10/56
Inventors: 李豪杰 (Li Haojie), 王智慧 (Wang Zhihui), 马新柱 (Ma Xinzhu), 欧阳万里 (Ouyang Wanli), 方欣瑞 (Fang Xinrui)
Owner: DALIAN UNIV OF TECH