
Single-image three-dimensional reconstruction method based on deep learning video supervision

A deep-learning-based 3D reconstruction technology, applied in the field of 3D reconstruction, which addresses problems such as the expense and difficulty of obtaining 3D data, time- and memory-intensive processing, and sensitivity to the order of the input sequence.

Pending Publication Date: 2020-11-17
NANJING UNIV


Problems solved by technology

Although these reconstruction methods can produce good 3D reconstruction results, they all have disadvantages: 3D supervision data is difficult and expensive to obtain, multi-image inputs must be correlated, the reconstruction process is time- and memory-consuming, and the results are sensitive to input order.




Detailed Description of Embodiments

[0059] As shown in Figure 1, the three-dimensional reconstruction method based on deep learning video supervision disclosed by the present invention is implemented according to the following steps:

[0060] 1. Build an object pose prediction module

[0061] Input: Video sequence of objects

[0062] Output: The predicted object pose for each frame

[0063] 1.1 Build the pose prediction network: construct an object pose prediction network model G.

[0064] The object pose prediction network G consists of an encoder and a decoder, with the trainable parameters of each layer denoted as θG. The encoder comprises nine 3×3 convolutional layers, each followed by a batch-normalization layer and a ReLU activation, then two fully connected layers, also with ReLU activations, which finally produce an encoding of the input. The decoder part contains ...
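The encoder described above can be sketched as follows. This is a hypothetical reconstruction, not the patent's actual implementation: the channel widths, stride schedule, pooling before the fully connected layers, and the 64-dimensional encoding size are all illustrative assumptions, since the patent text specifies only "nine 3×3 convolutional layers with batch norm and ReLU, then two fully connected layers with ReLU".

```python
import torch
import torch.nn as nn

class PoseEncoder(nn.Module):
    """Sketch of the pose-prediction encoder: nine 3x3 conv layers, each
    followed by batch normalization and ReLU, then two fully connected
    layers with ReLU. Channel counts, strides, and enc_dim are assumed."""
    def __init__(self, in_channels=3, enc_dim=64):
        super().__init__()
        layers = []
        c = in_channels
        for i in range(9):
            out_c = min(32 * 2 ** (i // 2), 256)  # assumed channel schedule
            stride = 2 if i % 2 == 0 else 1       # assumed downsampling
            layers += [nn.Conv2d(c, out_c, 3, stride=stride, padding=1),
                       nn.BatchNorm2d(out_c),
                       nn.ReLU(inplace=True)]
            c = out_c
        self.conv = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)       # collapse spatial dims
        self.fc = nn.Sequential(nn.Linear(c, 128), nn.ReLU(inplace=True),
                                nn.Linear(128, enc_dim), nn.ReLU(inplace=True))

    def forward(self, x):
        h = self.pool(self.conv(x)).flatten(1)
        return self.fc(h)
```

A decoder head would then map this encoding to a camera pose (e.g. rotation and translation parameters) per frame; the patent text for the decoder is truncated, so it is not sketched here.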



Abstract

The invention provides a single-image three-dimensional reconstruction method based on deep learning video supervision. The method comprises the following steps: 1) constructing an object pose prediction module, which obtains the pose of the camera relative to the object from the object in an input image; 2) constructing an object three-dimensional shape estimation module, which obtains a three-dimensional point cloud from a single input object image through iterative loss-driven optimization; 3) constructing a multi-frame shape fusion module: a video image sequence is fed to the two modules above in parallel to obtain per-frame camera pose and three-dimensional shape predictions, which are refined through multi-frame weighted fusion, a consistency constraint, and a smoothness constraint; and 4) arranging an overall training framework comprising three stages: data preprocessing, model training, and testing. The method achieves end-to-end three-dimensional reconstruction, can be trained on video sequences, and predicts a three-dimensional point cloud from only a single image at test time.
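The multi-frame fusion of step 3 can be illustrated with a minimal numpy sketch. The weighting scheme and the exact form of the consistency and smoothness terms are assumptions for illustration; the patent only names "multi-frame weighted fusion, a consistency constraint, and a smoothness constraint" without specifying formulas.

```python
import numpy as np

def fuse_frames(point_clouds, weights):
    """Hypothetical weighted fusion of per-frame point-cloud predictions.
    point_clouds: (F, N, 3) array, one N-point prediction per frame;
    weights: (F,) per-frame confidence scores (assumed form)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                               # normalize confidences
    return np.tensordot(w, point_clouds, axes=1)  # (N, 3) fused cloud

def consistency_loss(point_clouds, fused):
    """Mean squared deviation of each frame's prediction from the fused shape."""
    return float(np.mean((point_clouds - fused) ** 2))

def smoothness_loss(poses):
    """Penalize pose jumps between consecutive frames. poses: (F, D) array
    of per-frame pose parameters (e.g. rotation + translation)."""
    return float(np.mean(np.diff(poses, axis=0) ** 2))
```

With equal weights, fusion reduces to averaging the per-frame clouds; the two losses would be added to the training objective to encourage the per-frame predictions to agree with each other and to vary smoothly over the video.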

Description

[0001] Technical Field

[0002] The invention belongs to the technical field of three-dimensional reconstruction, and in particular relates to a single-image three-dimensional reconstruction method based on deep learning video supervision.

Background Technique

[0003] In recent years, with the development of deep learning, solutions to computer vision problems have advanced considerably. Various 2D image processing techniques have gradually matured and been applied to 3D problems, and reconstructing the 3D shape of objects has become a topic of active research. Many earlier methods require complete 3D model data for supervision, but such data is scarce and its acquisition is complex and expensive. As a result, many multi-image and single-image reconstruction methods have emerged. Because their supervisory information is weaker, however, these methods suffer reduced local detail accuracy and ambiguity in object viewpoint. A...


Application Information

IPC (8): G06K9/00; G06K9/62; G06N3/04; G06N3/08; G06Q10/04; G06T17/00
CPC: G06Q10/04; G06T17/00; G06N3/08; G06V20/49; G06N3/045; G06F18/25; G06F18/214
Inventor: 孙正兴, 仲奕杰, 武蕴杰, 宋有成
Owner NANJING UNIV