
Neural network training method and system for three-dimensional reconstruction

A neural network training and 3D reconstruction technology, applied in the field of neural network training methods and systems, that addresses the problems of low detection efficiency, complicated depth-information acquisition, and time-consuming, labor-intensive manual calibration.

Active Publication Date: 2021-04-23
杭州反重力智能科技有限公司

AI Technical Summary

Problems solved by technology

In order to obtain the correct ground truth, it is generally necessary to manually calibrate a large number of points, which is time-consuming and laborious.
[0003] In recent years, most computer-vision and deep-learning techniques have adopted passive methods, such as taking one or more images from a single viewpoint and inferring scene depth from two-dimensional cues such as shading and shadows; the reconstruction quality of these methods is only average.
Others train a target-detection neural network on monocular images annotated with detection data. Because a monocular image carries no accurate three-dimensional depth or scale information, the accuracy of the trained network is low. Alternatively, the depth of the monocular image can be obtained with lidar, a depth model, or similar means, and that depth used together with the annotated image for training; but because acquiring the depth information is complicated, the detection process is also complicated, detection takes longer, and detection efficiency is low.




Embodiment Construction

[0029] To make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are some, not all, of the embodiments of the present invention. Based on the described embodiments, all other embodiments obtained by persons of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.

[0030] In the present invention, unless otherwise clearly specified and limited, terms such as "installed", "connected", "coupled" and "fixed" should be interpreted broadly: a connection may, for example, be fixed, detachable or integral, and it may be mechanical or electrical...



Abstract

The invention provides a neural network training method and system for three-dimensional reconstruction. The method comprises the steps of: performing unsupervised calibration on input left and right camera images; obtaining a plurality of disparity maps by successively downscaling the left and right images; computing, for each point, the variance across the corresponding points of the disparity maps; and deriving a rough disparity map with which to train a deep convolutional neural network (DCNN). The trained DCNN for three-dimensional reconstruction can thus adapt easily to new conditions in practice, and a large amount of manual calibration is avoided.
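One plausible reading of the abstract's pipeline (the patent text does not give details, so the matching method, function names, and variance threshold below are all illustrative assumptions; a naive SAD block matcher stands in for whatever stereo matcher the invention actually uses) can be sketched as:

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, patch=5):
    """Toy SAD block-matching disparity for a rectified pair (illustrative only)."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x - half) + 1):
                cost = np.abs(
                    left[y - half:y + half + 1, x - half:x + half + 1]
                    - right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                ).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

def downscale(img):
    """Halve resolution by 2x2 averaging."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def rough_disparity(left, right, levels=3, var_thresh=1.0):
    """Multi-scale disparity maps + per-point variance filter.

    Assumed interpretation of the abstract: compute a disparity map at each
    downscaled level, upsample all maps to full resolution, and keep only the
    points whose variance across levels is low as the 'rough' disparity map
    (pseudo ground truth for training the DCNN).
    Assumes image dimensions divisible by 2**(levels-1).
    """
    h, w = left.shape
    l, r = left.astype(np.float32), right.astype(np.float32)
    maps = []
    for lvl in range(levels):
        d = sad_disparity(l, r, max_disp=16 >> lvl)
        scale = 2 ** lvl
        # upsample to full resolution; disparity values scale with resolution
        up = np.kron(d * scale, np.ones((scale, scale)))[:h, :w]
        maps.append(up)
        l, r = downscale(l), downscale(r)
    stack = np.stack(maps)
    var = stack.var(axis=0)
    rough = stack.mean(axis=0)
    rough[var > var_thresh] = np.nan  # discard points that disagree across scales
    return rough, var
```

The `rough` map (with unreliable points masked out) would then serve as the training target for the DCNN, replacing manually calibrated ground truth.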

Description

technical field

[0001] The present invention relates to the field of machine learning, and in particular to a neural network training method and system for three-dimensional reconstruction.

Background technique

[0002] The two human eyes see slightly offset views of a scene, and the difference between the views gives us the perception of depth. This difference is called parallax, and it is inversely proportional to depth: once the parallax is obtained, the depth of an object can be computed by formula, and the two-dimensional picture becomes three-dimensional. Two calibrated cameras capture a left image and a right image respectively, and the horizontal offset between corresponding points in the two images is the parallax. A correctly labeled map (ground truth) is required to determine, for each point, its corresponding point in the other image. In order to obtain the correct ground truth, it is generally necessary to man...
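The "formula" alluded to in [0002] is the standard rectified pinhole-stereo relation Z = f·B/d (not spelled out in the excerpt; the function name and example numbers below are illustrative):

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair.

    disparity_px: horizontal pixel offset between corresponding points
    focal_px:     camera focal length expressed in pixels
    baseline_m:   distance between the two camera centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, baseline = 0.12 m, disparity = 10 px
print(depth_from_disparity(10, 700, 0.12))  # → 8.4 (meters)
```

The inverse proportionality is visible directly: halving the disparity doubles the computed depth, which is why distant objects (small parallax) are the hardest to range accurately.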


Application Information

IPC(8): G06T17/00; G06K9/62; G06N3/04
Inventor 王荔, 范睿, 吕殿斌, 蒋佳霖, 余文彬
Owner 杭州反重力智能科技有限公司