
A method for semantic segmentation of scene point cloud

A scene point cloud semantic segmentation technology in the field of computer vision. It addresses the problems that point cloud scene understanding is limited by data resolution, that local features are not robust enough, and that large-scale dense point clouds are difficult to handle, achieving remarkable results.

Active Publication Date: 2019-03-01
DALIAN UNIV OF TECH
2 Cites · 62 Cited by

AI Technical Summary

Problems solved by technology

[0013] In order to solve technical problems such as traditional point cloud scene understanding being easily limited by data resolution, local features not being robust enough, and the difficulty of handling large-scale dense point clouds, a semantic segmentation framework for large-scale dense scene point clouds is designed based on deep learning technology. For an input large-scale dense scene point cloud, the framework converts the three-dimensional information of the point cloud into two-dimensional information that can be directly processed by convolution, without losing information, and combines image semantic segmentation techniques to complete the point cloud semantic segmentation task.
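As a hedged illustration of the core idea above, converting 3D point information into a 2D image that a convolutional network can process, the sketch below projects a point cloud onto a top-down feature image. The orthographic view, image size, and nearest-point depth test are assumptions chosen for the example; the patent excerpt does not fix a particular projection.

```python
import numpy as np

def project_to_image(xyz, feats, height=64, width=64):
    """Project points onto a 2D feature image so a standard CNN can
    process them.  Top-down orthographic projection (illustrative only);
    when several points land in one pixel, the highest point wins."""
    img = np.zeros((height, width, feats.shape[1]), dtype=np.float32)
    best_z = np.full((height, width), -np.inf)
    # Normalize x, y into [0, 1), then scale to pixel coordinates.
    mins, maxs = xyz[:, :2].min(axis=0), xyz[:, :2].max(axis=0)
    uv = (xyz[:, :2] - mins) / (maxs - mins + 1e-9)
    u = np.minimum((uv[:, 0] * width).astype(int), width - 1)
    v = np.minimum((uv[:, 1] * height).astype(int), height - 1)
    for i in range(xyz.shape[0]):
        if xyz[i, 2] > best_z[v[i], u[i]]:
            best_z[v[i], u[i]] = xyz[i, 2]
            img[v[i], u[i]] = feats[i]
    return img

rng = np.random.default_rng(2)
xyz = rng.uniform(0.0, 1.0, (500, 3))
feats = rng.uniform(0.0, 1.0, (500, 6))   # e.g. RGB plus extra channels
image = project_to_image(xyz, feats)
print(image.shape)  # (64, 64, 6)
```

The resulting H×W×C image can then be fed to any 2D image semantic segmentation network, and per-pixel predictions mapped back to the contributing points.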


Image

  • A method for semantic segmentation of scene point cloud (three figures)


Embodiment Construction

[0056] The invention is described in further detail below in conjunction with specific embodiments, but the present invention is not limited to these embodiments.

[0057] A method for semantic segmentation of large-scale dense scene point clouds based on deep learning, comprising the training of the network model and the steps for running the model.

[0058] 1. Training network model

[0059] To train the semantic segmentation network for large-scale dense scene point clouds, it is first necessary to prepare sufficient point cloud data. Each scene point cloud sample should contain each point's RGBXYZ features and the semantic category label to which the point belongs. Taking the S3DIS indoor scene dataset as an example, after data augmentation a total of 2654 scene point cloud samples are used as the training set, and 578 samples are used as the validation set.
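As a minimal sketch of the sample format described above, each scene can be held as an N×7 array of per-point XYZ coordinates, RGB color, and a semantic class label. The array layout, point count, and the synthetic-data generator are assumptions for illustration; only the RGBXYZ-plus-label content and the 13 S3DIS classes come from the text.

```python
import numpy as np

def make_scene_sample(num_points, num_classes, rng):
    """Build a synthetic scene point cloud sample in the format the
    patent describes: per-point XYZ coordinates, RGB color, and a
    semantic class label, stacked as an (num_points, 7) array."""
    xyz = rng.uniform(0.0, 5.0, size=(num_points, 3))   # coordinates in meters
    rgb = rng.uniform(0.0, 1.0, size=(num_points, 3))   # normalized color
    labels = rng.integers(0, num_classes, size=(num_points, 1)).astype(np.float64)
    return np.hstack([xyz, rgb, labels])

rng = np.random.default_rng(0)
sample = make_scene_sample(4096, 13, rng)   # S3DIS defines 13 semantic classes
print(sample.shape)  # (4096, 7)
```

A real loader would read these arrays from the S3DIS room files instead of generating them, but the downstream shape contract is the same.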

[0060] After obtaining a sufficient dataset, it is first necessary to convert the features of each point into RGBDHN info...
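The excerpt does not define the letters of "RGBDHN". A hedged sketch of one common reading, RGB plus depth (D), height (H), and a surface normal (N), is shown below; the camera origin, neighborhood size, and PCA-based normal estimation are all assumptions, not details from the patent.

```python
import numpy as np

def per_point_rgbdhn(xyz, rgb, cam_origin=np.zeros(3), k=16):
    """Augment per-point RGB with depth, height, and a surface normal.
    ASSUMPTION: 'D' = distance to a virtual camera, 'H' = height above
    the floor, 'N' = local-plane normal; the patent excerpt does not
    spell these out."""
    n = xyz.shape[0]
    depth = np.linalg.norm(xyz - cam_origin, axis=1, keepdims=True)   # D
    height = (xyz[:, 2] - xyz[:, 2].min()).reshape(-1, 1)             # H
    # N: normal from PCA of the k nearest neighbors (brute-force
    # distances; fine for small clouds, use a KD-tree at scale).
    d2 = ((xyz[:, None, :] - xyz[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]
    normals = np.empty((n, 3))
    for i in range(n):
        pts = xyz[nn[i]] - xyz[nn[i]].mean(axis=0)
        # The right singular vector for the smallest singular value of
        # the centered neighborhood is the local plane normal.
        _, _, vt = np.linalg.svd(pts, full_matrices=False)
        normals[i] = vt[-1]
    return np.hstack([rgb, depth, height, normals])   # shape (n, 8)

rng = np.random.default_rng(1)
xyz = rng.uniform(0.0, 1.0, (64, 3))
rgb = rng.uniform(0.0, 1.0, (64, 3))
feats = per_point_rgbdhn(xyz, rgb)
print(feats.shape)  # (64, 8)
```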



Abstract

The invention belongs to the technical field of computer vision. A method for semantic segmentation of scene point clouds is provided: a point cloud semantic segmentation model framework for large-scale dense scenes based on deep learning technology is designed. For an input large-scale dense scene point cloud, the framework transforms the three-dimensional information of the point cloud into two-dimensional information that can be processed directly by convolution, without losing information, and completes the point cloud semantic segmentation task in combination with image semantic segmentation technology. This framework effectively solves the semantic segmentation task for large-scale dense scene point clouds. The semantic segmentation results obtained by the method can be used directly in tasks such as robot navigation and autonomous driving, and the method is especially effective in non-synthetic natural scenes.

Description

Technical field

[0001] The invention belongs to the technical field of computer vision, and in particular relates to a method for semantically segmenting large-scale dense point cloud scenes based on deep learning.

Background technique

[0002] The use of convolutional neural networks to process 2D images dominates modern computer vision. A key factor in their success is the efficient processing of convolutions on images: convolutions are defined on a regular grid in the image, which enables extremely efficient implementations of the convolution operation. This property makes it possible to apply powerful deep architectures to large, high-resolution datasets.

[0003] When analyzing large-scale 3D scenes, a straightforward extension of the above approach is to perform 3D convolutions on a voxel grid. However, this voxel-based approach has significant limitations, including issues such as cubic growth of memory consumption and computational efficiency...
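The cubic memory growth mentioned in [0003] is easy to make concrete. The resolutions and the single float32 channel per voxel below are assumptions chosen for the arithmetic, not figures from the patent:

```python
# Memory for a dense voxel grid grows cubically with resolution:
# doubling the resolution multiplies voxel count (and memory) by 8.
bytes_per_voxel = 4  # assumed: one float32 occupancy channel
for res in (32, 64, 128, 256):
    mib = res ** 3 * bytes_per_voxel / 2 ** 20
    print(f"{res}^3 grid: {mib:.2f} MiB")
```

At 256³ a single occupancy channel already costs 64 MiB per scene, which is why dense voxelization struggles with large-scale, high-resolution point clouds.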


Application Information

IPC(8): G06T15/30, G06N3/04, G06N3/08
CPC: G06N3/08, G06T15/30, G06N3/045
Inventors: 李坤, 杨鑫, 尹宝才, 张强, 魏小鹏
Owner DALIAN UNIV OF TECH