
Multi-mode adaptive fusion three-dimensional target detection method

A technology relating to three-dimensional object detection methods, applied in three-dimensional object recognition, character and pattern recognition, instruments, etc. It solves the problem of low detection efficiency and achieves the effects of improved efficiency, improved detection performance, and reduced input.

Publication Date: 2019-12-06 (Inactive)
NORTHWESTERN POLYTECHNICAL UNIV

AI Technical Summary

Problems solved by technology

[0003] In order to overcome the shortcoming of low detection efficiency in existing 3D target detection methods, the present invention provides a 3D target detection method based on multi-modal adaptive fusion.

Method used



Examples


Embodiment Construction

[0012] Refer to figure 1. The specific steps of the multi-modal adaptive fusion three-dimensional target detection method of the present invention are as follows:

[0013] Step 1. Determine the information of the input image generated from the data in the KITTI dataset, including the name of the image, the label file of the image, the ground plane equation of the image, the point cloud, and the camera calibration information. Read 15 parameters from the label file (KITTI dataset format): the 2D label coordinates (x1, y1, x2, y2) and the 3D label coordinates (tx, ty, tz, h, w, l), i.e., the center point coordinates and the length, width, and height. Delete labels as required, for example removing the pedestrian and cyclist labels when training only on cars. Obtain the corresponding ground plane equation (a plane equation aX + bY + cZ = d), the camera calibration parameters (intrinsic and extrinsic), and the point cloud ([x, ...], [y, ...], [z, ...]). Create a bird's-eye-view (BEV) image. ...
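As a minimal Python sketch of this step, assuming the standard 15-field KITTI label format and a simple max-height BEV encoding; the function names, the BEV ranges, and the 0.1 m cell resolution are illustrative assumptions, not values from the patent:

```python
import numpy as np

def parse_kitti_label(line):
    """Parse one line of a KITTI label file (15 whitespace-separated fields)."""
    f = line.split()
    return {
        "type": f[0],                               # e.g. 'Car', 'Pedestrian', 'Cyclist'
        "bbox_2d": [float(v) for v in f[4:8]],      # (x1, y1, x2, y2) in pixels
        "dimensions": [float(v) for v in f[8:11]],  # (h, w, l) in meters
        "location": [float(v) for v in f[11:14]],   # (tx, ty, tz) in camera coordinates
        "rotation_y": float(f[14]),                 # yaw angle around the camera Y axis
    }

def load_labels(path, keep_classes=("Car",)):
    """Read a label file and drop classes not being trained, e.g. remove
    pedestrian and cyclist labels when training only on cars."""
    with open(path) as fh:
        labels = [parse_kitti_label(ln) for ln in fh if ln.strip()]
    return [lb for lb in labels if lb["type"] in keep_classes]

def make_bev_height_map(points, x_range=(0.0, 70.0), y_range=(-40.0, 40.0), res=0.1):
    """Discretize lidar points (N, 3) into a single-channel top-view height map."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[mask], y[mask], z[mask]
    rows = ((x - x_range[0]) / res).astype(int)
    cols = ((y - y_range[0]) / res).astype(int)
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    bev = np.zeros((h, w), dtype=np.float32)
    np.maximum.at(bev, (rows, cols), z)  # keep the max height per cell; empty cells stay 0
    return bev
```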



Abstract

The invention discloses a multi-mode adaptive fusion three-dimensional target detection method, which solves the technical problem of the low detection efficiency of existing three-dimensional target detection methods. According to the technical scheme, the method comprises: inputting an RGB image and a BEV map; first, using an FPN network structure comprising an encoder and a decoder to obtain full-resolution feature maps that combine bottom-layer detail information with high-layer semantic information; then extracting the features corresponding to the two feature maps by feature cropping and fusing the crops adaptively; and finally selecting 3D proposals to achieve 3D object detection. The whole process is two-stage detection. In addition, the RGB image and the point cloud are used as the original input and the LIDAR FV input is dropped, which reduces the amount of computation, lowers the computational complexity of the algorithm, and improves the efficiency of three-dimensional vehicle target detection. The algorithm effectively improves the detection of small objects and the detection rate of occluded and truncated vehicles.
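To illustrate the adaptive-fusion idea (not the patent's exact network), here is a hedged PyTorch sketch in which a learned 1x1-convolution gate weights equal-sized RGB and BEV feature crops per position; the class name AdaptiveFusion and all tensor shapes are assumptions:

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Fuse equal-sized feature crops from the RGB and BEV branches with a
    learned, input-dependent weighting instead of a fixed mean or concat."""

    def __init__(self, channels):
        super().__init__()
        # A 1x1 conv predicts a per-position gate from the concatenated crops.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_crop, bev_crop):
        w = self.gate(torch.cat([rgb_crop, bev_crop], dim=1))
        return w * rgb_crop + (1.0 - w) * bev_crop

# Example: fuse two 7x7 ROI crops with 256 channels each.
fusion = AdaptiveFusion(256)
fused = fusion(torch.randn(8, 256, 7, 7), torch.randn(8, 256, 7, 7))
```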

Description

Technical field

[0001] The invention relates to a three-dimensional target detection method, and in particular to a multi-modal adaptive fusion three-dimensional target detection method.

Background technique

[0002] The literature "X. Chen, H. Ma, J. Wan, B. Li, and T. Xia, 'Multi-view 3D object detection network for autonomous driving,' in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1907-1915" proposed a 3D object detection method based on RGB images and LIDAR point cloud information. The method aims at high-precision 3D object detection in autonomous driving scenarios and proposes a multi-view 3D network: a sensor fusion framework that takes lidar point clouds and RGB images as input and predicts oriented 3D bounding boxes. The network consists of two sub-networks, one for 3D object proposal generation and one for multi-view feature fusion. The 3D candidate boxes generated by the region proposal network can effectively represent the 3D point cloud from ...
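For orientation, a minimal PyTorch/torchvision sketch of the region-based multi-view fusion described above: crops of the same proposal are taken from each view's feature map with roi_align and fused element-wise. The mean fusion and the ROI tensor layout are illustrative assumptions, not the cited paper's exact fusion scheme:

```python
import torch
from torchvision.ops import roi_align

def fuse_multiview_rois(bev_feat, rgb_feat, bev_rois, rgb_rois, out_size=7):
    """Crop the same 3D proposal from each view's feature map and fuse the
    crops element-wise (here: a simple mean), in the spirit of region-based fusion.

    bev_rois / rgb_rois: (K, 5) tensors of (batch_index, x1, y1, x2, y2),
    obtained by projecting each 3D proposal into the BEV and image planes
    (projection step assumed, not shown).
    """
    bev_crop = roi_align(bev_feat, bev_rois, output_size=out_size)
    rgb_crop = roi_align(rgb_feat, rgb_rois, output_size=out_size)
    return 0.5 * (bev_crop + rgb_crop)
```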

Claims


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06K9/00G06K9/32G06K9/62
CPCG06V20/64G06V20/584G06V10/25G06V2201/07G06V2201/08G06F18/253
Inventor 袁媛王琦刘程堪
Owner NORTHWESTERN POLYTECHNICAL UNIV