
Intelligent vehicle target real-time detection and positioning method based on bionic vision

A real-time detection and positioning method for smart cars, applied in image data processing, instruments, and character and pattern recognition. It addresses the problems that prior unmanned-driving systems cannot reduce the amount of visual data computation and cannot build semantic SLAM online in real time, achieving high positioning accuracy, fast processing speed, and an enlarged observation angle.

Active Publication Date: 2020-11-10

AI Technical Summary

Problems solved by technology

[0007] In view of the above-mentioned shortcomings of the prior art, the present invention provides a method for real-time detection and positioning of intelligent vehicle targets based on bionic vision, which solves the technical problem that the prior art cannot reduce the amount of visual data computation in unmanned driving and therefore cannot build a semantic SLAM map online in real time.



Examples


Embodiment 1

[0074] As shown in Figure 1, this embodiment provides a method for real-time detection and positioning of smart car targets based on bionic vision, which specifically includes the following steps:

[0075] Step S1, acquiring, in real time, images from the lenses with different focal lengths in the multi-eye imaging device of the smart car;

[0076] Step S2, detecting the category and position information of each target in the image from each lens.

[0077] For example, the YOLOv5 real-time target detection algorithm can be used to detect the category and position information of each target in the image from each lens.

[0078] The targets of this embodiment may include traffic lights, speed limit signs, pedestrians, small animals, vehicles, or lane markings; this list is illustrative rather than limiting and is determined by the actual images.

[0079] The above category and position information includes: position information, size information and cate...
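A minimal sketch of Steps S1 and S2 is given below. It assumes the frames have already been grabbed per lens and uses the open-source ultralytics/yolov5 model loaded through torch.hub; the checkpoint choice ("yolov5s"), the detect_per_lens helper, and the output layout are illustrative assumptions rather than details taken from the patent.

```python
# Sketch of Steps S1-S2: per-lens target detection with a YOLOv5 model.
# Assumes the open-source ultralytics/yolov5 release; checkpoint and data
# layout are illustrative, not specified by the patent.
import torch

# Load a pretrained YOLOv5 detector (the model size is an assumption).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detect_per_lens(frames):
    """frames: dict mapping lens id -> image (numpy array, PIL image, or path).

    Returns a dict mapping lens id -> list of
    (class_name, x1, y1, x2, y2, confidence) tuples in pixel coordinates.
    """
    detections = {}
    for lens_id, frame in frames.items():
        results = model(frame)                  # run detection on this lens's image
        boxes = results.xyxy[0].cpu().numpy()   # rows: x1, y1, x2, y2, conf, cls
        detections[lens_id] = [
            (model.names[int(cls)], float(x1), float(y1),
             float(x2), float(y2), float(conf))
            for x1, y1, x2, y2, conf, cls in boxes
        ]
    return detections
```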

Embodiment 2

[0120] To meet the engineering requirements of unmanned driving for real-time detection and positioning on embedded platforms, and to greatly improve the target detection and positioning efficiency of an unmanned-driving visual information processing system, this embodiment of the present invention provides a real-time detection and positioning method for smart car targets based on bionic vision, as shown in Figure 9.

[0121] The method of this embodiment imitates the insect compound-eye imaging system to form a multi-eye imaging system that enlarges the field of view, uses the YOLOv5 deep-learning target detection method to classify and identify multiple targets, and segments the targets of interest from the whole image one by one. Then the overlapping and non-overlapping areas of the sub-fields of view of the multi-eye imaging system are divided into common-view and non-common-view areas, and data coordinate conversion is performed on the common-...
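Under the simplifying assumption that a short-focal-length (wide) lens and a long-focal-length (narrow) lens are roughly co-located and aligned on the same optical axis, the narrow lens's field of view corresponds to a central strip of the wide reference image: pixels inside that strip form the common-view area and the rest form the non-common-view area. The helper below is a geometric sketch of that split with illustrative parameters; it is not the patent's object-space coordinate transformation.

```python
# Geometric sketch: split a wide (short-focal-length) reference image into the
# strip shared with a narrow (long-focal-length) lens and the remaining
# non-common-view area. Assumes co-located, axis-aligned pinhole lenses with a
# common sensor width; all parameter names are illustrative.
import math

def horizontal_fov(focal_length_mm: float, sensor_width_mm: float) -> float:
    """Full horizontal field of view of a pinhole lens, in radians."""
    return 2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm))

def common_view_bounds(image_width_px: int,
                       f_short_mm: float,
                       f_long_mm: float,
                       sensor_width_mm: float) -> tuple:
    """Pixel bounds (x_left, x_right) in the short-focal-length image that fall
    inside the long-focal-length lens's narrower field of view."""
    fov_short = horizontal_fov(f_short_mm, sensor_width_mm)
    fov_long = horizontal_fov(f_long_mm, sensor_width_mm)
    # Fraction of the wide image's half-width covered by the narrow field of view.
    ratio = math.tan(fov_long / 2.0) / math.tan(fov_short / 2.0)
    cx = image_width_px / 2.0
    half_common = ratio * cx
    return int(cx - half_common), int(cx + half_common)

# Example: a 1920 px wide reference image, 8 mm vs 16 mm lenses on a 6 mm sensor.
# Columns outside the returned bounds belong to the non-common-view area.
left, right = common_view_bounds(1920, 8.0, 16.0, 6.0)
```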

Embodiment 3

[0163] According to another aspect, an embodiment of the present invention also provides an intelligent vehicle driving system. The intelligent driving system includes a control device and a multi-eye imaging device connected to the control device, and the multi-eye imaging device includes at least one short-focal-length lens and at least one long-focal-length lens.

[0164] After the control device receives the images collected by the short-focal-length and long-focal-length lenses, it uses the smart car real-time detection and positioning method described in either of the first and second embodiments above to build a three-dimensional semantic map for the smart car online in real time.

[0165] In practical applications, the short-focus lenses of the multi-eye imaging device in this embodiment are an 8 cm focal length lens and a 12 cm focal length lens, and the telephoto lenses are a 16 cm focal length lens and a 25 cm focal length...
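The system of this embodiment can be sketched structurally as a control device that owns a multi-eye imaging device with short- and long-focal-length lenses and folds each detection-and-positioning result into the online semantic map. The class names, the capture placeholder, and the example configuration below are assumptions made for illustration, not the patent's implementation.

```python
# Structural sketch of Embodiment 3: a control device connected to a multi-eye
# imaging device with short- and long-focal-length lenses. Class names, the
# capture placeholder, and the example configuration are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Lens:
    lens_id: str
    focal_length: float      # value as quoted in the embodiment text
    long_focus: bool         # True for the telephoto lenses

@dataclass
class MultiEyeImagingDevice:
    lenses: List[Lens]

    def capture(self) -> Dict[str, object]:
        # Placeholder: a real device would return one synchronized frame per
        # lens, keyed by lens_id.
        raise NotImplementedError

@dataclass
class ControlDevice:
    camera: MultiEyeImagingDevice
    detect_and_localize: Callable[[Dict[str, object]], dict]  # Embodiment 1/2 pipeline
    semantic_map: dict = field(default_factory=dict)

    def step(self) -> None:
        frames = self.camera.capture()
        # Merge this frame's detection/positioning output into the online
        # three-dimensional semantic map.
        self.semantic_map.update(self.detect_and_localize(frames))

# Example lens configuration mirroring the focal lengths mentioned above.
device = MultiEyeImagingDevice(lenses=[
    Lens("short_8", 8.0, False), Lens("short_12", 12.0, False),
    Lens("long_16", 16.0, True), Lens("long_25", 25.0, True),
])
```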



Abstract

The invention relates to an intelligent vehicle target real-time detection and positioning method based on bionic vision. The method comprises the steps of acquiring images of lenses with different focal lengths in a multi-view imaging device of an intelligent vehicle in real time; detecting the category and position information of each target in each image; dividing a reference image into a common-view area and a non-common-view area according to an object space homogeneous coordinate system transformation relationship of each lens; for the common-view area of each image, performing three-dimensional reconstruction positioning on a target in the common-view area by adopting a binocular different-focal-length stereoscopic vision reconstruction method to obtain three-dimensional positioning information and category label semantic information of the target; and for the non-common-view areas of the reference image, obtaining angle positioning information of a target in each non-common-view area so as to construct a vector map with semantic information for the intelligent vehicle. The method solves the technical problems that in the prior art, a three-dimensional semantic map cannot be constructed on line in real time, and the calculation amount and the calculation complexity of visual point cloud data in unmanned driving cannot be reduced.
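For the common-view area, the binocular different-focal-length stereo reconstruction mentioned in the abstract can be illustrated by the idealized depth relation for two rectified pinhole cameras that share a horizontal baseline but have different focal lengths. The function below is a simplified sketch under those assumptions; the variable names and the rectified-pair simplification are mine, not taken from the patent.

```python
# Idealized depth recovery for a point matched across two rectified pinhole
# cameras with different focal lengths (a simplification of the binocular
# different-focal-length reconstruction; names are illustrative).
def depth_from_dual_focal_stereo(x1: float, x2: float,
                                 f1: float, f2: float,
                                 baseline: float) -> float:
    """Depth Z of a point from its x-coordinates in the two images.

    Projection model (principal points at x = 0, cameras separated by
    `baseline` along the x-axis):
        x1 = f1 * X / Z,   x2 = f2 * (X - baseline) / Z
    Eliminating X gives
        Z = f1 * f2 * baseline / (f2 * x1 - f1 * x2),
    which reduces to the familiar Z = f * baseline / (x1 - x2) when f1 == f2.
    """
    generalized_disparity = f2 * x1 - f1 * x2
    if abs(generalized_disparity) < 1e-9:
        raise ValueError("near-zero generalized disparity: point too far away")
    return f1 * f2 * baseline / generalized_disparity
```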

Description

Technical field

[0001] The invention relates to the technical field of visual detection and positioning, and in particular to a method for real-time detection and positioning of intelligent vehicle targets based on bionic vision.

Background technique

[0002] In recent years, with the deepening of research on biomimetic technology, artificial intelligence technology has been developing at an unprecedented speed and achieving breakthroughs. Building bionic vision sensors and efficient computer vision solutions by simulating the human visual system and the insect compound-eye imaging mechanism, and thereby realizing real-time detection and positioning of targets, is a current research hotspot in bionic vision artificial intelligence. Unlike lidar, which can only realize distance perception, vision sensors can also recognize complex semantic information such as traffic signs, traffic lights, and lane lines. Therefore, the research on unmanned vehicle technology based on bionic visi...


Application Information

IPC(8): G06K9/00; G06T7/70
CPC: G06T7/70; G06T2207/10004; G06T2207/30248; G06T2207/20081; G06T2207/20084; G06V20/56; G06V2201/07
Inventor: 安成刚, 张立国, 李巍, 李会祥, 吴程飞, 张志强, 王增志, 张旗, 史明亮
Owner: 廊坊和易生活网络科技股份有限公司