Laser monocular vision fusion positioning mapping method in dynamic scene

A monocular-vision and dynamic-scene technology, applied in image analysis, image data processing, measuring devices, etc. It addresses the problems that robot map construction in dynamic scenes is still immature and that the underlying theory needs further development, and achieves the effects of reduced computation, increased reliability, and accurate motion estimation of dynamic obstacles.

Active Publication Date: 2021-09-03
HUNAN UNIV +1


Problems solved by technology

A robot in an unknown environment therefore faces three questions: 1) Where am I? 2) What is around me? 3) Where am I going? These correspond to: 1) autonomous positioning of the robot; 2) map construction by the robot; 3) real-time obstacle avoidance and path planning of the robot.
At present, most research on the visual SLAM problem assumes that the surrounding environment is static; laser and monocular vision fusion positioning and mapping in dynamic scenes is still immature, and the relevant theory needs further development.



Examples


Embodiment Construction

[0092] The present invention will be described in detail below in conjunction with the accompanying drawings and embodiments.

[0094] As shown in Figure 1, the autonomous-driving hardware involved in the present invention includes a lidar 1, a monocular camera 2, and a computing unit 3. The lidar 1 detects dynamic-obstacle point cloud data and provides depth prior information at absolute scale for the monocular camera 2; it corresponds to the radar coordinate system below. The monocular camera 2, as the main sensor of the positioning and mapping system, collects image data of dynamic scenes and can be combined with the point cloud data for positioning and mapping in dynamic scenes; it corresponds to the camera coordinate system below. The computing unit 3 is used to detect obstacles and estimate their states by using ...
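The depth-prior step described above, in which lidar points are projected into the camera image through the calibrated extrinsics, can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the function name, the example matrix values, and the pinhole-camera model are assumptions introduced here.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project lidar points (N, 3) into the image plane as depth priors.

    T_cam_lidar: 4x4 extrinsic transform (radar frame -> camera frame),
                 assumed to come from an external calibration step.
    K:           3x3 camera intrinsic matrix.
    Returns pixel coordinates (M, 2) and the corresponding depths (M,)
    for points in front of the camera.
    """
    # Homogeneous coordinates, then transform radar frame -> camera frame
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points with positive depth (in front of the camera)
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, pts_cam[:, 2]

# Toy example: identity extrinsics and simple intrinsics (assumed values)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
T = np.eye(4)
pts = np.array([[0.0, 0.0, 10.0],
                [1.0, 0.5,  5.0]])
uv, depth = project_lidar_to_image(pts, T, K)
# The point on the optical axis lands at the principal point (320, 240)
```

In a real system the projected depths would be associated with nearby ORB feature points to resolve the monocular scale ambiguity the passage mentions.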



Abstract

The invention discloses a laser and monocular vision fusion positioning and mapping method for dynamic scenes. The method comprises the following steps:
1. Detect dynamic-obstacle information from the point cloud data of the current frame and the pose prior predicted by the visual odometer of the previous frame.
2. Form an image mask on the monocular camera 2, whose extrinsic parameters are calibrated against the lidar; extract ORB feature points on the masked image and match them with the ORB feature points of the previous frame; estimate the depth of the ORB feature points; obtain the relative pose through pose calculation; and output key-frame information that meets the requirements.
3. Insert the key frame into the covisibility graph; update the connections between the key frame and other key frames, the spanning tree, and the bag-of-words model according to the covisibility of map points; generate new map points; find adjacent key frames from the essential graph of the current key frame; construct a nonlinear optimization problem; and optimize the poses and map points of the key frames.
4. Judge whether the similarity between the image data of each key frame and that of the current key frame reaches a threshold. If so, a loop closure has occurred: replace or fill in the map points where conflicts exist between the current key frame and the loop key frame, connect the two key frames in the essential graph, update the essential graph, and finally perform global bundle adjustment (BA) to obtain optimized key-frame poses, a feature-point map, and a point cloud map.
The invention enables SLAM to be performed in dynamic scenes.
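The loop-closure test in step 4 compares the appearance of key frames against a similarity threshold. A minimal sketch of that idea, assuming bag-of-words histograms as the image representation: the function names, the cosine-similarity score, and the example vocabulary size are assumptions introduced here (production systems such as DBoW2 use tf-idf weighted scores instead).

```python
import numpy as np

def bow_similarity(v1, v2):
    """Cosine similarity between two bag-of-words histograms."""
    n1, n2 = np.linalg.norm(v1), np.linalg.norm(v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    return float(np.dot(v1, v2) / (n1 * n2))

def detect_loop(current_bow, keyframe_bows, threshold=0.8):
    """Return indices of past key frames whose similarity to the current
    key frame reaches the threshold, i.e. loop-closure candidates."""
    return [i for i, kf in enumerate(keyframe_bows)
            if bow_similarity(current_bow, kf) >= threshold]

# Toy histograms over a hypothetical 5-word visual vocabulary
kfs = [np.array([5.0, 0.0, 1.0, 0.0, 2.0]),   # visually similar place
       np.array([0.0, 4.0, 0.0, 3.0, 0.0])]   # dissimilar place
cur = np.array([4.0, 0.0, 1.0, 0.0, 2.0])
candidates = detect_loop(cur, kfs)  # only the first key frame qualifies
```

Once a candidate is accepted, the method merges conflicting map points, links the two key frames in the essential graph, and runs global BA, as described above.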

Description

Technical field
[0001] The invention relates to a laser and monocular vision fusion positioning and mapping method for the field of autonomous driving, in particular to a laser and monocular vision fusion positioning and mapping method for dynamic scenes.
Background technique
[0002] With the rapid development of technologies such as automation, artificial intelligence, and robotics, more and more intelligent unmanned systems with perception, positioning, and navigation functions are being applied in everyday and industrial scenarios, such as sweeping robots, warehouse robots, and driverless taxis. With the continuous iteration of sensor technology and intelligent algorithms, the application scenarios of these unmanned systems have expanded from simple known environments to completely unknown complex environments. In an unknown environment, the robot needs to use information from its sensors to perceive the surroundings and estimate its own state, so as to ensure that ...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T7/73, G06T7/66, G06K9/46, G06K9/62, G01C22/00
CPC: G06T7/73, G06T7/66, G01C22/00, G06F18/2321, G06F18/23213
Inventor 秦晓辉芦涛尚敬胡云卿刘海涛徐彪谢国涛秦兆博胡满江王晓伟边有钢秦洪懋丁荣军
Owner HUNAN UNIV