
Semantic mapping system based on instant positioning mapping and three-dimensional semantic segmentation

A semantic segmentation and semantic mapping technology applied in the field of computer vision. It addresses problems such as reduced efficiency and weakened system performance, and achieves improved efficiency, improved system performance, and strong robustness.

Active Publication Date: 2019-08-06
SOUTHEAST UNIV
Cites 4 · Cited by 29

AI Technical Summary

Problems solved by technology

Compared with direct semantic perception of 3D point clouds, such an approach reduces efficiency and weakens system performance to a certain extent.

Embodiment Construction

[0039] The present invention is described below in further detail with reference to the accompanying drawings and specific embodiments:

[0040] The present invention provides a semantic mapping system based on real-time positioning and mapping and three-dimensional semantic segmentation. A feature-point-based sparse mapping system is used to extract key frames and camera poses. For each key frame, a mature two-dimensional object detection method first extracts regions of interest; then inter-frame information (the camera pose) and spatial information (the image depth) are used to obtain candidate frustums. Each frustum is segmented by a point cloud semantic segmentation method, and a Bayesian update scheme is designed to fuse the segmentation results of different frames. The present invention aims to make full use of inter-frame and spatial information to improve system performance.
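The core geometric step in [0040] — lifting a 2D detection box through the depth image and the SLAM camera pose into a world-frame candidate frustum — can be sketched as follows. This is a minimal illustration under a pinhole camera model; the function name, argument layout, and use of NumPy are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def frustum_points(depth, box, K, T_wc):
    """Lift pixels inside a 2D detection box into a world-frame
    frustum point cloud, using the depth image and camera pose.

    depth : (H, W) depth map in metres (0 = no measurement)
    box   : (u_min, v_min, u_max, v_max), upper bounds exclusive
    K     : (3, 3) pinhole camera intrinsics
    T_wc  : (4, 4) camera-to-world pose from the SLAM front end
    """
    u0, v0, u1, v1 = box
    us, vs = np.meshgrid(np.arange(u0, u1), np.arange(v0, v1))
    z = depth[vs, us]
    valid = z > 0                                 # discard pixels with no depth
    us, vs, z = us[valid], vs[valid], z[valid]
    # back-project through the pinhole model: x = (u - cx) * z / fx, etc.
    x = (us - K[0, 2]) * z / K[0, 0]
    y = (vs - K[1, 2]) * z / K[1, 1]
    pts_c = np.stack([x, y, z, np.ones_like(z)])  # homogeneous, shape (4, N)
    pts_w = T_wc @ pts_c                          # camera frame -> world frame
    return pts_w[:3].T                            # (N, 3) world-frame points
```

With an identity pose the returned points are simply the back-projected camera-frame coordinates; a real pose places each key frame's frustum consistently in the shared map frame, which is what allows segmentations from different frames to be fused point-wise.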

[0041] The following is based on Ubuntu16.04 and N...



Abstract

The invention discloses a novel semantic mapping system based on instant positioning and mapping and three-dimensional point cloud semantic segmentation, and belongs to the technical fields of computer vision and artificial intelligence. The method establishes a sparse map using instant positioning and mapping to obtain key frames and camera poses, and performs semantic segmentation on the key frames using point cloud semantic segmentation. A two-dimensional object detection method and point cloud splicing are used to obtain frustum proposals, a Bayesian update scheme is designed to fuse the semantic labels of candidate frustums, and points with final corrected labels are inserted into the established sparse map. Experiments show that the system has high efficiency and accuracy.
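The abstract's Bayesian update scheme for fusing per-frame semantic labels can be sketched as a standard recursive Bayesian label update: assuming observations from different frames are conditionally independent given the true class, the posterior over classes is proportional to the element-wise product of the prior and the new frame's class probabilities. The patent does not publish its exact formula, so the function below is an assumed minimal version of this idea, not the patented implementation.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Fuse one frame's per-class probability vector for a map point
    into its running label distribution.

    prior      : (C,) current class distribution of the point
    likelihood : (C,) class probabilities from the current frame's
                 point cloud segmentation
    returns    : (C,) normalised posterior distribution
    """
    post = prior * likelihood          # independence assumption
    s = post.sum()
    if s == 0:                         # degenerate observation: keep the prior
        return prior.copy()
    return post / s
```

Starting from a uniform prior, repeated updates sharpen the distribution toward the class that frames consistently agree on, and the argmax of the fused distribution gives the corrected label inserted into the sparse map.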

Description

Technical field

[0001] The invention relates to the technical field of computer vision, in particular to a semantic mapping system based on real-time location mapping and three-dimensional semantic segmentation.

Background technique

[0002] Service robots generally consist of three modules: human-computer interaction, environmental perception, and motion control. To perceive the surrounding environment, a robot needs a stable and powerful sensor system to act as an "eye", along with corresponding algorithms and a powerful processing unit to understand objects. Among these sensors, the visual sensor is indispensable. Compared with lidar and millimeter-wave radar, a camera has higher resolution and can capture sufficient environmental detail, such as describing the appearance and shape of objects and reading signs. Although the Global Positioning System (GPS) is helpful to the positioning process, the interference caused...

Claims


Application Information

IPC(8): G06T7/10, G06T7/73, G06T7/66, G06K9/46, G06K9/62
CPC: G06T7/10, G06T7/73, G06T7/66, G06T2207/10016, G06T2207/10028, G06T2207/20084, G06V10/462, G06F18/241, Y02T10/40
Inventor: 杨绿溪, 郑亚茹, 宋涣, 赵清玄, 邓亭强
Owner SOUTHEAST UNIV