Lane detection method and system based on multi-level fusion of vision and lidar

A technology relating to lidar and lane detection, applied in radio-wave measurement systems, measurement devices, and the re-radiation of electromagnetic waves.

Active Publication Date: 2020-09-18
TSINGHUA UNIV


Problems solved by technology

[0005] The purpose of the present invention is to overcome the deficiencies of the prior art by proposing a lane detection method based on multi-level fusion of vision and laser radar. The method combines the laser radar point cloud with the camera image for lane detection: the point cloud supplies spatial information for the image, while the image compensates for the low sampling density of the point cloud. This improves the robustness of the lane detection algorithm in complex road scenes such as uphill lanes, uneven lighting, heavy fog, and night.

Method used


Examples


Embodiment 2

[0086] Embodiment 2 of the present invention proposes a lane detection system based on multi-level fusion of vision and lidar. The system includes: a lidar, a vehicle-mounted camera, and a lane detection module. The lane detection module includes: the semantic segmentation network 3D-LaneNet, a calibration unit, a first lane candidate area detection unit, a second lane candidate area detection unit, and a lane fusion unit;

[0087] Lidar is used to obtain point cloud data;

[0088] The on-board camera is used to obtain video images;

[0089] The calibration unit is used to calibrate the obtained point cloud data and video images;
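The patent does not give the calibration formulas. As a minimal sketch, assuming a pinhole camera model with known lidar-to-camera extrinsics (R, t) and intrinsic matrix K (all names here are illustrative, not from the patent), calibrated projection of point cloud data into the image could look like:

```python
import numpy as np

def project_points_to_image(points_xyz, K, R, t):
    """Project Nx3 lidar points into pixel coordinates.

    K: 3x3 camera intrinsic matrix; R, t: lidar-to-camera extrinsic
    rotation (3x3) and translation (3,). Returns Nx2 pixel coordinates
    and a boolean mask of points in front of the camera.
    """
    cam = points_xyz @ R.T + t          # lidar frame -> camera frame
    in_front = cam[:, 2] > 0            # keep points with positive depth
    uvw = cam @ K.T                     # apply intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]       # perspective divide
    return uv, in_front
```

Once projected, each lidar point can be associated with the RGB pixel it lands on, which is the prerequisite for the fusion steps below.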

[0090] The first lane candidate area detection unit is used to fuse the height information and reflection intensity information of the point cloud data with the RGB information of the video image to construct a point cloud clustering model, obtain the lane point cloud based on the point cloud clustering model, and fit the lane point cloud by the least squares method ...
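The patent names least-squares fitting of the lane point cloud to a lane surface but does not specify the surface model. A minimal sketch, assuming a planar surface z = a·x + b·y + c (the function name and the plane assumption are illustrative):

```python
import numpy as np

def fit_lane_surface(lane_points):
    """Least-squares fit of a plane z = a*x + b*y + c to lane points (Nx3).

    Returns the coefficients (a, b, c). A higher-order surface could be
    fit the same way by adding columns (e.g. x*y, x**2) to A.
    """
    A = np.column_stack([lane_points[:, 0], lane_points[:, 1],
                         np.ones(len(lane_points))])
    coeffs, *_ = np.linalg.lstsq(A, lane_points[:, 2], rcond=None)
    return coeffs
```

The fitted surface defines the first lane candidate area; points whose height deviates strongly from the surface can be rejected as non-road.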



Abstract

The invention discloses a lane detection method and system based on multi-level fusion of vision and laser radar. The method is realized by installing a laser radar and a vehicle-mounted camera on a vehicle, and includes: calibrating the obtained point cloud data and video images; constructing a point cloud clustering model by fusing the height information and reflection intensity information of the point cloud data with the RGB information of the video images, obtaining the lane point cloud based on the point cloud clustering model, and obtaining the first lane candidate area by least-squares fitting of the lane point cloud to the lane surface; fusing the reflection intensity information of the point cloud data with the RGB information of the video images to obtain four-channel road information, inputting it into the pre-trained semantic segmentation network 3D-LaneNet, and outputting the image of the second lane candidate area; and fusing the first lane candidate area and the second lane candidate area, taking the union of the two lane candidate areas as the final lane area. The method of the invention improves the accuracy of lane detection in complex road scenes.
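The two fusion steps in the abstract can be sketched in a few lines, assuming the lidar reflectance has already been projected into the image plane as a per-pixel map (function names are illustrative, not from the patent):

```python
import numpy as np

def make_four_channel_input(rgb_image, intensity_map):
    """Stack RGB (H x W x 3) with a per-pixel lidar reflectance map (H x W)
    into the four-channel road information fed to the segmentation network."""
    return np.concatenate([rgb_image, intensity_map[..., None]], axis=-1)

def fuse_candidates(mask_a, mask_b):
    """Final lane area = union of the two boolean lane candidate masks."""
    return np.logical_or(mask_a, mask_b)
```

Taking the union (rather than the intersection) of the geometric and semantic candidates favors recall: a lane pixel missed by one branch can still be recovered by the other.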

Description

Technical field

[0001] The invention relates to the technical field of automatic driving, and in particular to a lane detection method and system based on multi-level fusion of vision and laser radar.

Background technique

[0002] Lane detection in road scenes is a key technical link in realizing automatic driving: it ensures that vehicles drive within the lane limits and avoid collisions with targets such as pedestrians outside the lane. Subsequent detection of lane lines within the effective lane area is then faster and more accurate, so that the vehicle can safely and automatically drive in the correct lane.

[0003] It is relatively easy for humans to identify lanes on the road, but in complex scenes such as strong light, heavy fog, and night, human lane identification capability is limited. To realize automatic driving, accurate detection of lanes in complex scenes is required. Most existing lane detect...

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC (8): G06K9/00; G06K9/62; G01S17/87; G01S17/93; G01S7/48; G06V10/56; G06V10/764
CPC: G01S17/87; G01S7/4802; G01S7/4808; G06V20/588; G06F18/25; G01S17/86; G01S17/931; G06V10/454; G06V10/56; G06V10/82; G06V10/764; G06V10/806; G06F18/2413; G06F18/253; G06F18/23; G06F18/214; G06F18/251
Inventors: 张新钰, 李志伟, 刘华平, 李骏, 李太帆, 周沫, 谭启凡
Owner TSINGHUA UNIV