
Method for semantic completion of single depth map point cloud scene

A depth map completion technology, applied in image enhancement, image analysis, image data processing, etc., that addresses problems such as the inability to obtain a complete point cloud from a single depth map.

Active Publication Date: 2020-12-04
DALIAN UNIV OF TECH

AI Technical Summary

Problems solved by technology

However, these methods all take point clouds as input and process the 3D points directly through different networks; they cannot generate a complete point cloud with semantic labels from a partial depth map.



Examples

Embodiment Construction

[0051] The specific embodiments of the present invention are further described below in conjunction with the drawings and technical solutions.

[0052] In this embodiment, a training set and a test set are generated from the SUNCG dataset. 1590 scenes are randomly selected for rendering; 1439 of them are used for DQN training and the rest for DQN testing. To train the segmentation completion and depth map completion networks, 5 or 6 viewpoints defined in the action space are randomly selected and applied to the above 1590 scenes, rendering more than 10,000 sets of depth maps and semantic segmentation ground truth. Of these, one thousand groups are selected to test the present invention.
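A minimal sketch of the data preparation described above is given below. It assumes a hypothetical rendering callback `render_fn` and an action-space viewpoint list; the actual SUNCG rendering pipeline is not specified in this text.

```python
import random

# Illustrative sketch only: the scene IDs, the action-space viewpoint list and
# the rendering callback are placeholders, not part of the original disclosure.
NUM_SCENES = 1590      # scenes randomly selected for rendering
NUM_DQN_TRAIN = 1439   # scenes used for DQN training; the remainder test the DQN

def split_scenes(all_scene_ids, seed=0):
    """Randomly select 1590 SUNCG scenes; 1439 train the DQN, the rest test it."""
    rng = random.Random(seed)
    selected = rng.sample(list(all_scene_ids), NUM_SCENES)
    return selected[:NUM_DQN_TRAIN], selected[NUM_DQN_TRAIN:]

def render_training_pairs(scenes, action_space_viewpoints, render_fn, seed=0):
    """For each scene, render 5 or 6 randomly chosen action-space viewpoints,
    collecting (depth map, semantic segmentation ground truth) pairs."""
    rng = random.Random(seed)
    pairs = []
    for scene in scenes:
        n_views = rng.choice((5, 6))
        for viewpoint in rng.sample(action_space_viewpoints, n_views):
            depth, seg_gt = render_fn(scene, viewpoint)  # placeholder renderer
            pairs.append((depth, seg_gt))
    return pairs
```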

[0053] The present invention comprises four main components: a depth map semantic segmentation network, a voxel completion network, a segmentation completion network, and a depth map completion network. All required DCNN networks are impleme...
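As an illustration only, the four components named above could be wired together as in the sketch below. Because the paragraph is truncated before the implementation details, the choice of PyTorch, the module interfaces, and the data flow between the sub-networks are all assumptions; the sub-networks are placeholders standing in for the DCNNs, not the patented architectures.

```python
import torch.nn as nn

class SceneSemanticCompletion(nn.Module):
    """Illustrative wiring of the four components named in [0053].
    The sub-networks and the data flow between them are placeholders."""

    def __init__(self, depth_seg_net, voxel_completion_net,
                 seg_completion_net, depth_completion_net):
        super().__init__()
        self.depth_seg_net = depth_seg_net                 # depth map -> semantic segmentation
        self.voxel_completion_net = voxel_completion_net   # partial voxels -> completed voxels
        self.seg_completion_net = seg_completion_net       # completes the segmentation map
        self.depth_completion_net = depth_completion_net   # completes the depth map

    def forward(self, depth_map, partial_voxels):
        seg = self.depth_seg_net(depth_map)
        voxels = self.voxel_completion_net(partial_voxels)
        seg_full = self.seg_completion_net(seg, voxels)
        depth_full = self.depth_completion_net(depth_map, voxels)
        return depth_full, seg_full
```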



Abstract

The invention provides a method for semantic completion of a single depth map point cloud scene, and belongs to the field of three-dimensional reconstruction in computer vision. The method converts viewpoint-based hole repair into a process of mutual projection among the depth map, the depth segmentation map and the point cloud, and performs high-resolution point cloud completion and semantic segmentation at the same time. It addresses two problems of scene semantic completion: the low resolution of the voxel representation and the inability of the point cloud representation to also provide semantic segmentation. By performing scene semantic completion on the three-dimensional point cloud, the high-resolution geometric structure and the semantic details of the scene can be recovered at the same time; based on a single depth map, the tasks of three-dimensional point cloud completion and semantic segmentation are completed simultaneously; and the effectiveness of semantic information and three-dimensional geometric constraints for semantic completion of the point cloud scene is verified.
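The mutual projection between a depth map and a point cloud that the abstract refers to can be illustrated with a standard pinhole back-projection. The sketch below is a generic example with assumed camera intrinsics (fx, fy, cx, cy), not the patented procedure itself.

```python
import numpy as np

def depth_to_point_cloud(depth, seg, fx, fy, cx, cy):
    """Back-project a depth map (H x W, metres) and its per-pixel semantic
    labels into a labelled point cloud using a pinhole camera model.
    Generic sketch of the depth <-> point cloud projection, for illustration."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    valid = z > 0                                    # keep pixels with measured depth
    points = np.stack([x[valid], y[valid], z[valid]], axis=-1)   # N x 3 points
    labels = seg[valid]                              # N semantic labels
    return points, labels
```

Projecting completed points back into the image plane of another viewpoint uses the inverse mapping of the same camera model.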

Description

Technical Field

[0001] The invention belongs to the field of three-dimensional reconstruction (3D Reconstruction) in computer vision. The concrete result is point cloud semantic completion of indoor scenes, and the invention relates in particular to a method for performing surface completion and semantic segmentation simultaneously.

Background Technique

[0002] Semantic scene reconstruction in 3D reconstruction is the process of recovering a 3D scene from 2D images and obtaining its semantic information. With the help of depth information, 3D scene reconstruction can be more accurate and reliable. However, collected depth maps are often incomplete because of occlusion and a fixed viewpoint, so understanding and reconstructing a partial depth map is very important. From the early voxel-based methods, to end-to-end deep convolutional neural network architectures, to the fusion of RGB and depth information, methods for semantic scene completion have kept improving ...


Application Information

IPC (IPC(8)): G06T5/00; G06T17/00; G06K9/34; G06N3/04; G06N7/00
CPC: G06T17/00; G06T2207/10028; G06T2207/20081; G06T2207/20084; G06V10/267; G06N7/01; G06N3/045; G06T5/77
Inventor: 杨鑫, 李童, 张肇轩, 尹宝才, 朴星霖
Owner: DALIAN UNIV OF TECH