
A three-dimensional reconstruction method, apparatus, device and storage medium

A technology relating to three-dimensional reconstruction and three-dimensional meshes, applied in 3D modeling, image analysis, image enhancement, and similar fields. It addresses the problem that GPU-based methods are not portable, improving portability and reducing computational complexity.

Active Publication Date: 2020-12-15
SHENZHEN GRADUATE SCHOOL TSINGHUA UNIV

AI Technical Summary

Problems solved by technology

However, GPUs are not portable, and GPU-based methods are therefore difficult to apply to mobile robots, portable devices, and wearable devices (such as the Microsoft HoloLens augmented reality headset).



Examples


Embodiment 1

[0030] Figure 1 is a flow chart of a 3D reconstruction method provided by Embodiment 1 of the present invention. This embodiment is applicable to real-time 3D reconstruction of a target scene based on a depth camera. The method can be performed by a 3D reconstruction apparatus, which can be implemented in software and/or hardware and integrated into a smart terminal (mobile phone, tablet computer) or a three-dimensional visual interaction device (VR glasses, wearable helmet). As shown in Figure 1, the method specifically includes:

[0031] S110. Using a preset fast global optimization algorithm, determine the relative camera pose of the current depth image key frame with respect to the preset depth image key frame.

[0032] Preferably, the current depth image key frame corresponding to the current target scene can be acquired by the depth camera, where the target scene may preferably be an indoor space s...
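A sketch for orientation only: S110 produces a relative camera pose between two key frames, but the fast global optimization algorithm itself is not reproduced in this excerpt. The snippet below merely illustrates, under the common SE(3) convention, how such a relative pose can be represented and composed as 4x4 matrices; the function names are illustrative assumptions, not part of the patented method.

```python
import numpy as np

def make_pose(R, t):
    """Assemble a 4x4 SE(3) camera pose from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_ref, T_cur):
    """Relative pose of the current key frame with respect to the reference (preset) key frame.

    T_ref, T_cur are world-from-camera poses; the result T_rel satisfies T_cur = T_ref @ T_rel.
    """
    return np.linalg.inv(T_ref) @ T_cur

# Toy usage: reference key frame at the origin, current key frame shifted 10 cm along x.
T_ref = make_pose(np.eye(3), np.zeros(3))
T_cur = make_pose(np.eye(3), np.array([0.10, 0.0, 0.0]))
print(relative_pose(T_ref, T_cur)[:3, 3])  # -> [0.1 0.  0. ]
```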

Embodiment 2

[0096] Figure 2 is a flow chart of a three-dimensional reconstruction method provided by Embodiment 2 of the present invention. This embodiment is further optimized on the basis of the foregoing embodiment. As shown in Figure 2, the method specifically includes:

[0097] S210. Using a preset fast global optimization algorithm, determine the relative camera pose of the current depth image key frame with respect to the preset depth image key frame.

[0098] S220. Divide the current depth image key frame into multiple grid voxels according to a preset grid voxel unit, and divide the multiple grid voxels into at least one spatial block.

[0099] The preset grid voxel unit may preferably be chosen according to the accuracy of the 3D model required for real-time 3D reconstruction. For example, to achieve CPU-based real-time reconstruction of a 3D model at 30 Hz with a grid voxel precision of 5 mm, 5 mm can be used as the preset grid voxel unit to con...
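A minimal sketch of S220 under stated assumptions: depth pixels are back-projected with pinhole intrinsics, quantized to the 5 mm grid voxel unit of paragraph [0099], and the resulting grid voxels are grouped into spatial blocks. The intrinsics and the 8-voxel block edge are illustrative assumptions; the excerpt shown does not fix a block size.

```python
import numpy as np

VOXEL_SIZE = 0.005   # 5 mm grid voxel unit, as in paragraph [0099]
BLOCK_SIDE = 8       # voxels per spatial-block edge (assumed, not given in the excerpt)

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth key frame (H x W, metres) into 3D points in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                          # keep pixels with valid depth

def to_blocks(points):
    """Quantize points to grid voxels, then group the voxels into spatial blocks."""
    voxels = np.unique(np.floor(points / VOXEL_SIZE).astype(np.int64), axis=0)
    blocks = np.unique(voxels // BLOCK_SIDE, axis=0)
    return voxels, blocks

# Toy usage: a flat synthetic depth image with assumed VGA intrinsics.
depth = np.full((480, 640), 1.0)
voxels, blocks = to_blocks(backproject(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5))
print(len(voxels), "occupied grid voxels in", len(blocks), "spatial blocks")
```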

Embodiment 3

[0119] Figure 3 is a flow chart of a three-dimensional reconstruction method provided by Embodiment 3 of the present invention. This embodiment is further optimized on the basis of the foregoing embodiments. As shown in Figure 3, the method specifically includes:

[0120] S310. Using a preset fast global optimization algorithm, determine the relative camera pose of the current depth image key frame with respect to the preset depth image key frame.

[0121] S320. Divide the current depth image key frame into multiple grid voxels according to a preset grid voxel unit, and divide the multiple grid voxels into at least one spatial block.

[0122] S330. For each spatial block, calculate the distance from the grid voxel corresponding to each vertex to the surface of the target scene, according to the relative camera pose and the depth values in the current depth image key frame.

[0123] S340. Select spatial blocks whose distances from gr...
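A minimal sketch of S330 and S340 under stated assumptions: each voxel center is transformed by the relative camera pose, projected into the current depth key frame with assumed pinhole intrinsics, and its signed distance to the observed surface is the measured depth minus the voxel's depth; blocks with at least one voxel inside an assumed truncation band are kept. The truncation distance, intrinsics, helper names, and the selection criterion are illustrative readings of the truncated text, not the patent's exact formulation.

```python
import numpy as np

TRUNC = 0.02  # truncation distance in metres (assumed)

def voxel_sdf(voxel_centers, T_rel, depth, fx, fy, cx, cy):
    """Signed distance from each voxel center to the observed surface (sketch of S330).

    voxel_centers: (N, 3) voxel centers in the reference frame.
    T_rel: 4x4 relative camera pose mapping reference coordinates into the current camera frame.
    depth: current depth image key frame (H x W, metres).
    """
    h, w = depth.shape
    pts = (T_rel[:3, :3] @ voxel_centers.T + T_rel[:3, 3:4]).T   # voxels in the camera frame
    sdf = np.full(len(pts), np.inf)
    front = np.flatnonzero(pts[:, 2] > 0)                        # voxels in front of the camera
    u = np.round(pts[front, 0] * fx / pts[front, 2] + cx).astype(int)
    v = np.round(pts[front, 1] * fy / pts[front, 2] + cy).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)                 # voxels projecting into the image
    idx = front[ok]
    sdf[idx] = depth[v[ok], u[ok]] - pts[idx, 2]                 # measured depth minus voxel depth
    return sdf

def effective_blocks(block_to_sdf):
    """Sketch of S340: keep blocks with at least one voxel inside the truncation band."""
    return [b for b, sdf in block_to_sdf.items() if np.any(np.abs(sdf) < TRUNC)]
```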



Abstract

Disclosed are a three-dimensional reconstruction method, apparatus, and device, and a storage medium. The three-dimensional reconstruction method comprises: determining a relative camera pose of a current depth image key frame with respect to a preset depth image key frame by using a preset fast global optimization algorithm; determining at least one effective space block corresponding to the current depth image key frame by using a sparse sampling method; fusing the at least one effective space block with a first three-dimensional grid model corresponding to a previous depth image key frame on the basis of the relative camera pose, to obtain a second three-dimensional grid model corresponding to the current depth image key frame; and generating an equivalent contour surface of the second three-dimensional grid model by using an accelerated marching cubes algorithm, to obtain a three-dimensional reconstruction model of a target scene.
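The last step of the abstract extracts the equivalent contour surface (the zero level set of the fused signed-distance volume) with an accelerated marching cubes algorithm. The acceleration itself is not detailed in this excerpt; as a minimal sketch, the standard marching cubes implementation in scikit-image can extract such an isosurface from a dense TSDF volume, here a synthetic sphere sampled at the 5 mm voxel unit used in the description.

```python
import numpy as np
from skimage import measure

# Synthetic TSDF volume: signed distance to a 10 cm sphere inside a 64^3 grid of 5 mm voxels.
coords = np.indices((64, 64, 64)).astype(float) * 0.005          # voxel-center coordinates
center = np.array([0.16, 0.16, 0.16]).reshape(3, 1, 1, 1)
tsdf = np.linalg.norm(coords - center, axis=0) - 0.10            # negative inside, positive outside

# Extract the zero-level isosurface, i.e. the reconstructed surface mesh.
verts, faces, normals, values = measure.marching_cubes(tsdf, level=0.0, spacing=(0.005,) * 3)
print(verts.shape, faces.shape)                                  # mesh vertices and triangle faces
```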

Description

technical field

[0001] Embodiments of the present invention relate to the technical fields of computer graphics and computer vision, and in particular to a three-dimensional reconstruction method, apparatus, device, and storage medium.

Background technique

[0002] Real-time 3D reconstruction is a hot topic in computer vision and robotics. It uses specific devices and algorithms to reconstruct mathematical models of real-world 3D objects in real time, and has great practical value in human-computer interaction, path planning, and machine perception.

[0003] Existing real-time 3D reconstruction algorithms are generally based on depth cameras (RGB-D cameras). To guarantee the quality, global consistency, and real-time performance of the reconstruction results, they usually involve a large amount of computation and require a high-performance GPU to reconstruct the 3D model. However, the GP...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/70, G06T17/10, G06T17/20
CPC: G06T17/10, G06T17/20, G06T2207/10028, G06T2207/20221, G06T2207/30244, G06T7/70
Inventors: 方璐, 韩磊
Owner: SHENZHEN GRADUATE SCHOOL TSINGHUA UNIV