
Real scene three-dimensional semantic reconstruction method and device based on deep learning and storage medium

A deep-learning and three-dimensional technology applied in the field of remote sensing, surveying and mapping, and geographic information. It addresses the problem of inaccurate labeling across multiple scenes and achieves improved computing speed, unaffected depth-network performance, and high-precision segmentation.

Inactive Publication Date: 2021-11-19
土豆数据科技集团有限公司

AI Technical Summary

Problems solved by technology

[0005] The embodiments of the present application solve the prior-art problem of inaccurate labeling across multiple scenes by providing a deep-learning-based real-scene 3D semantic reconstruction method. The method achieves high-precision segmentation even when a scene contains many objects with severe stacking; in large-scale scenes, the performance of the depth estimation network is unaffected, and stable, accurate estimation is achieved across a variety of scenarios.




Embodiment Construction

[0064] The technical solutions in the embodiments of the present application are described clearly and completely below in conjunction with the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the scope of protection of this application.

[0065] Semantic 3D modeling is a challenging task that has received extensive attention in recent years. With the help of small UAVs, multi-view, high-resolution aerial images of large-scale scenes can be collected easily. This application proposes a deep-learning-based real-scene 3D semantic reconstruction method that obtains the semantic probability distribution of 2D images through a convolutional neural network and uses structure from motion (SfM...
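As a rough illustration of the first stage, the per-pixel semantic probability distribution produced by a segmentation CNN is typically the softmax of the network's class logits. The sketch below shows only that step; the array shapes, function name, and use of NumPy are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def pixel_probabilities(logits):
    """Turn per-pixel class logits of shape (H, W, C) from a segmentation
    CNN into a per-pixel probability distribution, using a numerically
    stable softmax over the class axis."""
    shifted = logits - logits.max(axis=-1, keepdims=True)  # avoid overflow
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

# toy example: a 2x2 image with 3 classes
logits = np.zeros((2, 2, 3))
logits[0, 0, 1] = 5.0  # pixel (0, 0) strongly favors class 1
probs = pixel_probabilities(logits)
```

Each pixel's probabilities sum to one, so the map can later be fused across views rather than committing to a hard label per image.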



Abstract

The invention discloses a real-scene three-dimensional semantic reconstruction method, device, and storage medium based on deep learning, relates to the technical field of remote sensing, surveying and mapping, and geographic information, and solves the prior-art problem of inaccurate multi-scene labeling. The method comprises: obtaining an aerial image; performing semantic segmentation on the aerial image to determine a pixel probability distribution map; performing structure from motion on the aerial image to determine the camera pose of the aerial image; performing depth estimation on the aerial image to determine a depth map; and performing semantic fusion of the pixel probability distribution map, the camera pose, and the depth map to determine a three-dimensional semantic model. High-precision segmentation is thus realized even when a scene contains many objects with severe stacking; in large-scale scenes, the performance of the depth estimation network is unaffected and stable, accurate estimation is possible in various scenes; and, compared with other traditional three-dimensional reconstruction algorithms, the semantic three-dimensional reconstruction algorithm constructed by the invention has an increased calculation speed.
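The fusion step named in the abstract, combining the depth map, camera pose, and pixel probability map into a labeled 3D model, can be sketched as back-projecting each depth pixel into world space and attaching its most probable semantic class. The pinhole model, the pose convention x_cam = R·X + t, and the function name below are assumptions for illustration; the patent's actual fusion procedure may differ.

```python
import numpy as np

def fuse_semantic_depth(depth, probs, K, R, t):
    """Back-project a depth map (H, W) into world coordinates using camera
    intrinsics K and pose (R, t), and attach to each 3D point the argmax
    semantic label from the per-pixel probability map probs (H, W, C).
    Assumed convention: a world point X maps to the camera as R @ X + t."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N homogeneous pixels
    rays = np.linalg.inv(K) @ pix                   # camera-frame rays (z = 1)
    cam_pts = rays * depth.reshape(1, -1)           # scale each ray by its depth
    world_pts = R.T @ (cam_pts - t.reshape(3, 1))   # invert the pose
    labels = probs.reshape(-1, probs.shape[-1]).argmax(axis=-1)
    return world_pts.T, labels                      # (N, 3) points, (N,) labels

# toy check: identity camera and pose, unit depth -> points on the z = 1 plane
depth = np.ones((2, 2))
probs = np.zeros((2, 2, 2)); probs[..., 1] = 1.0    # every pixel favors class 1
pts, labels = fuse_semantic_depth(depth, probs, np.eye(3), np.eye(3), np.zeros(3))
```

Running this per image and accumulating class probabilities over points seen from multiple views is one plausible way such a fusion resolves the inconsistent per-view labels mentioned in the background section.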

Description

Technical field

[0001] This application relates to the technical field of remote sensing, surveying and mapping, and geographic information, and in particular to a method, device, and storage medium for real-scene 3D semantic reconstruction based on deep learning.

Background technique

[0002] 3D reconstruction and scene understanding are research hotspots in the computer field. 3D models with correct geometry and semantic segmentation are crucial in areas such as urban planning, autonomous driving, and machine vision. In urban scenes, semantic labels are used to visualize objects such as buildings, vegetation, and roads. 3D point clouds with semantic labels make 3D maps easier to understand and facilitate subsequent research and analysis. Although 3D semantic modeling has been extensively studied, different ways of extracting semantic information often lead to inconsistent or incorrect results during point cloud reconstruction. Compared with two-dimensional images, semantic segme...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00, G06K9/34, G06T7/50, G06T7/70, G06N3/04
CPC: G06T7/50, G06T7/70, G06N3/045
Inventors: 何娇, 王江安
Owner: 土豆数据科技集团有限公司