
Convolutional network three-dimensional model reconstruction method based on multi-view cost volume

A technology combining convolutional networks and 3D models, applied in the field of 3D model reconstruction from a multi-view cost volume. It addresses the problems that the differentiable arg-min operation is easily affected by multiple modes and that existing implementations perform poorly, thereby improving the rationality of the network and ensuring the result is independent of the input.

Pending Publication Date: 2022-07-08
NANJING UNIV

AI Technical Summary

Problems solved by technology

However, the differentiable arg-min operation is easily affected by multiple modes, and in practice its results are unsatisfactory.
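As a toy illustration of why a differentiable arg-min (the "soft-argmin" common in cost-volume methods such as MVSNet; the depth range and cost values below are hypothetical, not from this patent) fails under multiple modes: the estimate is a probability-weighted average over depth hypotheses, so a bimodal cost distribution pulls the answer to a depth supported by neither mode.

```python
import numpy as np

depths = np.linspace(1.0, 4.0, 7)          # candidate depth planes (hypothetical)

def soft_argmin(cost, depths):
    # softmax over negative cost -> probability per depth hypothesis,
    # then the expected depth under that distribution
    p = np.exp(-cost - np.max(-cost))
    p /= p.sum()
    return float((p * depths).sum())

# Unimodal cost: the estimate sits at the true minimum.
unimodal = np.array([9., 6., 3., 0., 3., 6., 9.])
# Bimodal cost (two equally good depths at 1.0 and 4.0): the expectation
# lands midway between the modes, at a depth supported by neither --
# the "multiple modes" failure noted above.
bimodal = np.array([0., 3., 9., 9., 9., 3., 0.])

print(soft_argmin(unimodal, depths))   # ~2.5, the true cost minimum
print(soft_argmin(bimodal, depths))    # also ~2.5, far from both modes
```

Both calls return the same depth, but only in the unimodal case is that depth actually a low-cost hypothesis.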



Examples


Embodiment

[0081] The objective tasks of the present invention are shown in Figures 2 through 6: Figure 2 shows the input multi-view images; Figure 3 shows the specific camera parameters; Figure 4 shows the quantized voxel representation used for supervision; Figure 5 shows the visualized voxel representation used for supervision; and Figure 6 shows the voxel result reconstructed by the network. The structure of the whole method is shown in Figure 7. Each step of the present invention is described below with examples.

[0082] In step (1), feature maps of the input multi-view image data are extracted through an encoding network with shared weights. Taking three views as an example, this is divided into the following steps:

[0083] In step (1.1), the input dataset (drawn from the 13 categories of the ShapeNet dataset) contains 20 rendered images per sample (in .png format, with the width, height a...
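A minimal sketch of what "weight sharing" in step (1) means, with hypothetical tensor shapes and a naive convolution standing in for the real encoding network: the same kernel tensor is applied to every view, so the three feature maps live in a common feature space.

```python
import numpy as np

rng = np.random.default_rng(0)
views = rng.standard_normal((3, 1, 8, 8))   # 3 views, 1 channel, 8x8 (toy sizes)
kernel = rng.standard_normal((4, 1, 3, 3))  # ONE kernel: 4 output feature channels

def conv2d_valid(img, kernel):
    # naive 'valid' 2D convolution + ReLU: one image -> one feature map
    c_out, c_in, kh, kw = kernel.shape
    _, h, w = img.shape
    out = np.zeros((c_out, h - kh + 1, w - kw + 1))
    for o in range(c_out):
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[o, i, j] = np.sum(img[:, i:i+kh, j:j+kw] * kernel[o])
    return np.maximum(out, 0.0)

# The same shared kernel is applied to each view in turn.
features = np.stack([conv2d_valid(v, kernel) for v in views])
print(features.shape)   # (3, 4, 6, 6): one feature map per view
```

Because the weights are shared, identical inputs would yield identical features, which is what makes the per-view feature volumes comparable in the later cost-volume fusion.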



Abstract

The invention discloses a convolutional-network three-dimensional model reconstruction method based on a multi-view cost volume. The method comprises the steps of: extracting feature maps from the input multi-view image data through an encoding network with shared weights; warping the feature maps to different depths along planes parallel to the reference camera viewpoint, obtaining a feature volume for each feature map; fusing the multiple feature volumes into one cost volume using a variance-based cost metric; and denoising the resulting cost volume with 3D grid reasoning to reconstruct the final voxels, completing the convolutional-network three-dimensional reconstruction based on the multi-view cost volume. The method uses three-dimensional grid reasoning to add geometric constraints and generate high-quality output; each view is taken in turn as the reference view, ensuring that the final result is independent of the input view order and improving the rationality of the network; and the depth values are chosen so that the shape of the reconstruction matches the shape used for supervision.
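The variance-based cost metric described in the abstract can be sketched as follows (toy tensor shapes; the real method operates on feature volumes warped to the reference camera's depth planes): at every voxel, the cost is the variance of the features across the N views, so low cost means the views agree and the hypothesis is likely correct. Note the fusion works for any number of views N and folds them into a single volume.

```python
import numpy as np

rng = np.random.default_rng(1)
N, C, D, H, W = 3, 4, 5, 6, 6                   # views, channels, depth, height, width (toy)
feature_volumes = rng.standard_normal((N, C, D, H, W))

def variance_cost(volumes):
    # fuse N per-view feature volumes into one cost volume:
    # per-voxel variance of the features across views
    mean = volumes.mean(axis=0)
    return ((volumes - mean) ** 2).mean(axis=0)

cost_volume = variance_cost(feature_volumes)
print(cost_volume.shape)   # (4, 5, 6, 6): view dimension folded away

# Sanity check: if all views produced identical features, the cost is zero.
same = np.repeat(feature_volumes[:1], N, axis=0)
print(np.allclose(variance_cost(same), 0.0))   # True
```

The variance is symmetric in the views, which is one reason cost-volume pipelines can accept an arbitrary, unordered set of input views.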

Description

Technical Field

[0001] The invention relates to a three-dimensional model reconstruction method, in particular to a convolutional-network three-dimensional model reconstruction method based on a multi-view cost volume.

Background

[0002] In computer vision, 3D reconstruction refers to the process of depth data acquisition, preprocessing, point cloud registration and fusion, surface generation, and so on, describing a real scene as a mathematical model that conforms to computer logic. In recent years, with the explosive growth and availability of 3D data and the emergence of deep learning methods, 3D reconstruction has become a hot field. The focus of 3D reconstruction technology is how to obtain the depth information of the target scene or object. Once the depth information of a scene is known, its three-dimensional reconstruction can be achieved simply through the registration and fusion of the point cloud data. The deep applicat...

Claims


Application Information

Patent Type & Authority: Applications (China)
IPC (8): G06T17/00; G06N3/04; G06N3/08
CPC: G06T17/00; G06N3/08; G06N3/045
Inventors: 张岩, 谢吉雨, 贾晓玉, 张化鹏, 郑鹏飞, 何振, 刘琨, 皋婕, 刘馨蓬
Owner NANJING UNIV