
Binocular vision obstacle detection method based on three-dimensional point cloud segmentation

An obstacle detection and 3D point cloud technology, applied to image analysis, image data processing, instruments, etc.; it addresses problems such as low accuracy, inability to be applied directly, and complex application environments, and achieves the effect of high reliability and practicability.

Status: Inactive | Publication Date: 2014-07-30
GUILIN UNIV OF ELECTRONIC TECH +1

AI Technical Summary

Problems solved by technology

Applications such as autonomous mobile robots and autonomous driving typically face complex environments affected by optical distortion and noise, specular reflections from smooth surfaces, projection reduction, perspective distortion, low texture, repetitive texture, transparent objects, and overlapping or discontinuous regions, so a dense disparity map cannot be guaranteed from stereo matching alone.
In addition, in a complex road environment, relying only on road-color priors or road-edge detection is not accurate enough to detect the road in a single image and cannot be applied directly in practice.


Image

[Drawings: Binocular vision obstacle detection method based on three-dimensional point cloud segmentation (three figures)]

Examples


Embodiment Construction

[0028] An automatic obstacle detection method based on 3D point cloud segmentation fused with color information, as shown in figure 1, includes the following steps:

[0029] Step 1: Acquire two color images from two cameras at different positions. Calibrate the binocular camera with a stereo calibration method, computing the intrinsic and extrinsic parameters of the two cameras and their relative pose. Using these parameters, remove the distortion of the two cameras and align their rows (or columns) so that the imaging origin coordinates of the two color images coincide, yielding a rectified binocular color view. The pitch angle and height of the camera relative to the road surface are acquired by a sensor or set in advance. The relative pose and focal lengths of the two cameras are fixed, that is, once calibrated they are not changed. The pitch angle and height of the ...
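As a concrete illustration of this step, the sketch below shows how the calibration results could be used for undistortion and row alignment with OpenCV. It is not the patent's own code; the intrinsics, distortion coefficients, baseline, and image size are placeholder assumptions.

```python
import cv2
import numpy as np

# Placeholder calibration results; in practice these come from
# cv2.stereoCalibrate on views of a known pattern (assumed values).
image_size = (1280, 720)
K1 = K2 = np.array([[700.0,   0.0, 640.0],
                    [  0.0, 700.0, 360.0],
                    [  0.0,   0.0,   1.0]])
d1 = d2 = np.zeros(5)
R = np.eye(3)                         # relative rotation between the cameras
T = np.array([[0.12], [0.0], [0.0]])  # assumed 12 cm horizontal baseline

# Rectification: rotate both views so epipolar lines become aligned rows
# and the two image origins coincide, as the step above requires.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, T)

# Undistortion + rectification maps, applied to each incoming frame pair.
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, image_size, cv2.CV_32FC1)

def rectify_pair(left_bgr, right_bgr):
    """Return the corrected (row-aligned) binocular color view."""
    left_rect = cv2.remap(left_bgr, map1x, map1y, cv2.INTER_LINEAR)
    right_rect = cv2.remap(right_bgr, map2x, map2y, cv2.INTER_LINEAR)
    return left_rect, right_rect
```

The reprojection matrix Q returned here is what later turns a disparity map into a 3D point cloud.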



Abstract

The invention provides a binocular vision obstacle detection method based on three-dimensional point cloud segmentation. The method comprises: synchronously acquiring images of the same specification from two cameras, calibrating and rectifying the binocular camera, and calculating a three-dimensional point cloud segmentation threshold; obtaining a three-dimensional point cloud through a stereo matching algorithm and three-dimensional reconstruction, and segmenting the reference image into image blocks; automatically detecting the road-surface height in the three-dimensional point cloud and using the segmentation threshold to split the cloud into a road-surface point cloud, obstacle point clouds at different positions, and unknown-region point clouds; and combining the segmented point clouds with the segmented image blocks to verify the obstacles and the road surface and to determine the position ranges of the obstacles, the road surface, and the unknown regions. With this method, the camera pose and the road-surface height can still be detected in complex environments, the three-dimensional segmentation threshold is estimated automatically, the obstacle, road-surface, and unknown-region point clouds are obtained by segmentation, and color image segmentation is incorporated so that color information is fused to verify the obstacles and the road surface and determine their position ranges, achieving highly robust obstacle detection with greater reliability and practicability.
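The sketch below illustrates the overall pipeline summarized in the abstract: stereo matching, 3D reconstruction, and height-based segmentation of the point cloud into road, obstacle, and unknown regions. The SGBM matcher, the median-based road-height estimate, and the height_margin threshold are illustrative assumptions, not the patent's actual procedure.

```python
import cv2
import numpy as np

def detect_obstacles(left_rect, right_rect, Q, height_margin=0.15):
    """left_rect/right_rect: rectified grayscale pair; Q: reprojection matrix."""
    # 1) Dense disparity via semi-global block matching (one possible matcher).
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = sgbm.compute(left_rect, right_rect).astype(np.float32) / 16.0

    # 2) Reproject the disparity map to a 3D point cloud (X, Y, Z per pixel).
    points = cv2.reprojectImageTo3D(disparity, Q)
    valid = disparity > 0                       # pixels with a usable match

    # 3) Crude stand-in for automatic road-height detection: take the dominant
    #    Y value (camera Y axis points down) among valid points as the road.
    road_height = np.median(points[..., 1][valid])

    # 4) Label pixels: road if close to the road height, obstacle if clearly
    #    above it, unknown where no reliable disparity exists.
    labels = np.zeros(disparity.shape, dtype=np.uint8)            # 0 = unknown
    labels[valid & (np.abs(points[..., 1] - road_height) < height_margin)] = 1  # road
    labels[valid & (points[..., 1] < road_height - height_margin)] = 2          # obstacle
    return labels, points
```

In the patent's method these point-cloud labels would then be fused with color-based image segmentation of the reference view to confirm obstacle and road regions.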

Description

Technical field
[0001] The invention relates to the field of automatic obstacle detection based on binocular stereo vision, for applications such as autonomous mobile robots and automatic driving, and in particular to a binocular vision obstacle detection method based on three-dimensional point cloud segmentation.
Background technique
[0002] Binocular stereo vision is an important branch of computer vision. It directly imitates the way human eyes perceive a scene, is simple and reliable, and has great application value in many fields, such as robot navigation, aerial survey, three-dimensional measurement, intelligent transportation, and virtual reality. In binocular stereo vision, the same scene is captured by two cameras at different positions (or by one camera after moving or rotating), and the three-dimensional coordinates of a spatial point are obtained by computing its parallax between the two images. In the research of autonomous mobile robots and autonomous driving, th...
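For reference, the parallax-to-coordinate relation mentioned above can be written as Z = f·B/d for a rectified pair, where f is the focal length in pixels, B the baseline, and d the disparity. The small sketch below shows this back-projection; the focal length, principal point, and baseline values are assumptions for illustration.

```python
def triangulate_point(u_left, v, disparity, fx, cx, cy, baseline):
    """Return the 3D coordinates (X, Y, Z) of a pixel from its disparity."""
    Z = fx * baseline / disparity          # depth from parallax: Z = f * B / d
    X = (u_left - cx) * Z / fx             # back-project the horizontal pixel offset
    Y = (v - cy) * Z / fx                  # square pixels assumed (fy == fx)
    return X, Y, Z

# Example: 700 px focal length, 12 cm baseline, 20 px disparity -> Z = 4.2 m.
print(triangulate_point(800, 400, 20.0, fx=700.0, cx=640.0, cy=360.0, baseline=0.12))
```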


Application Information

IPC(8): G06T7/00, G06T17/00
Inventors: 袁华, 曾日金, 莫建文, 陈利霞, 张彤, 首照宇, 欧阳宁, 赵晖
Owner GUILIN UNIV OF ELECTRONIC TECH