
Conditional generative adversarial network-based monocular image depth estimation method

A conditional generative adversarial network and image depth estimation technology, applied in image enhancement, image analysis, image data processing, and related fields. It addresses the problems that existing methods are time-consuming and lack generality, achieving the effect of improving the quality of the generated depth images.

Inactive Publication Date: 2018-09-21
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

It is not difficult for humans to infer the underlying 3D structure from a single image, but it remains a challenging task for computer vision algorithms, since there are no specific and reliable features, such as geometric information, that can be exploited directly.
[0003] Current research on depth estimation from monocular images falls into three main categories. The first studies depth estimation in scenes with geometric constraints; such methods map image intensity or color information directly to depth values, but they do not generalize to natural scenes. The second adds extra information, such as user annotations or semantic annotations, to the input features for depth estimation; such methods rely on manual labeling of images, which is time-consuming. The third uses deep learning to train a Convolutional Neural Network (CNN) so that it directly learns the mapping between a monocular image and its depth map, allowing a depth image to be fitted directly from a color image; most of the current best-performing methods in the field of depth estimation belong to this category.

Method used


Examples


Embodiment Construction

[0024] To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the specific implementations of the present invention are further described below in conjunction with the embodiments and the accompanying drawings.

[0025] Monocular image depth estimation is an ill-posed problem: countless depth images are consistent with a single color image. In recent years, a common practice has been to use a deep convolutional neural network to regress directly against the ground-truth depth image in some distance space, but the result of this approach is an average over all possible depth values, so the predicted image is usually blurred. The present invention uses a generative adversarial network, in which a discriminator judges whether the generated depth map corresponds to the scene in the original color image, which better addresses the shortcomings of existing methods.
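The conditional adversarial objective described in [0025] can be sketched as follows. This is a toy, shape-only illustration: the linear "discriminator" and the random weight vector are hypothetical stand-ins, not the network defined in the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator(color, depth, w):
    """Toy conditional discriminator: scores how plausible a (color, depth) pair is."""
    pair = np.concatenate([color.ravel(), depth.ravel()])
    return sigmoid(pair @ w)

def cgan_losses(color, real_depth, fake_depth, w, eps=1e-8):
    d_real = discriminator(color, real_depth, w)   # training pushes this toward 1
    d_fake = discriminator(color, fake_depth, w)   # training pushes this toward 0
    # Discriminator loss: label real pairs as 1, generated pairs as 0.
    d_loss = -np.log(d_real + eps) - np.log(1.0 - d_fake + eps)
    # Generator loss: fool the discriminator on generated pairs.
    g_loss = -np.log(d_fake + eps)
    return d_loss, g_loss

color = rng.standard_normal((3, 8, 8))       # RGB input image
real_depth = rng.standard_normal((1, 8, 8))  # ground-truth depth map
fake_depth = rng.standard_normal((1, 8, 8))  # generator output
w = rng.standard_normal(color.size + real_depth.size) * 0.01
d_loss, g_loss = cgan_losses(color, real_depth, fake_depth, w)
```

The key point, reflected in `discriminator`, is that the discriminator sees the color image together with the depth map, so it judges whether the depth corresponds to that particular scene rather than whether it merely looks like a plausible depth map.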

[0026] The specific technical det...



Abstract

The invention relates to a conditional generative adversarial network-based monocular image depth estimation method. The method comprises the following steps: (1) preprocessing a data set; (2) constructing the generator of the generative adversarial network: an encoder-decoder structure composed of convolution and deconvolution layers is built, and a skip-connection structure is added on this basis, mapping each output feature map of the encoder to the input of the symmetric decoder layer and concatenating them along the channel dimension, so as to enrich the detail information available to the decoder; low-level information is thereby shared between the input and output layers, ensuring that the high-level output images retain low-level detail features and improving the quality of the generated depth images; (3) constructing the discriminator of the generative adversarial network; (4) constructing the loss function of the generative adversarial network; and (5) training and testing the constructed generative adversarial network.
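The channel-wise skip connections between the encoder and the symmetric decoder described in step (2) can be illustrated with a shape-only sketch. The `down` and `up` functions below are hypothetical stand-ins for the strided convolution and deconvolution layers, not the patent's actual architecture; only the concatenation pattern is the point.

```python
import numpy as np

def down(x):
    """Stand-in for a strided conv layer: halve H and W, double channels."""
    c, h, w = x.shape
    pooled = x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
    return np.concatenate([pooled, pooled], axis=0)

def up(x):
    """Stand-in for a deconv layer: double H and W, halve channels."""
    c, h, w = x.shape
    upsampled = np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)
    return upsampled[: c // 2]

def generator(x, levels=3):
    skips = []
    h = x
    for _ in range(levels):
        skips.append(h)              # save each encoder feature map
        h = down(h)
    for skip in reversed(skips):
        h = up(h)
        # Skip connection: concatenate encoder features along the channel axis,
        # feeding low-level detail into the symmetric decoder layer.
        h = np.concatenate([h, skip], axis=0)
        h = h[: skip.shape[0]]       # stand-in for a conv that fuses channels
    return h

x = np.random.default_rng(0).standard_normal((3, 32, 32))
y = generator(x)
```

Because each decoder layer receives the encoder feature map of the same resolution, fine spatial detail bypasses the bottleneck instead of being squeezed through it.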

Description

technical field
[0001] The invention relates to the technical field of monocular image depth estimation, and in particular to a depth estimation method based on a generative adversarial network.
Background technique
[0002] Depth information reflects geometric information that 2D images lack, and is of great significance for 3D scene reconstruction, gesture recognition, human body pose estimation, etc. [1]. At present, there are two main ways to obtain depth information: one is to use hardware devices such as lidar and Kinect to measure distance directly; the other is to use multiple viewpoints, such as binocular image pairs, and estimate depth from parallax. Depth sensors are expensive, and multi-view methods require multiple image acquisition devices to be configured. Therefore, estimating the depth of a natural scene from a single monocular image is of great significance in the fields of scene understanding, 3D modeling, and ro...
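The binocular route mentioned in [0002] recovers depth from parallax via the pinhole stereo model, depth = focal_length × baseline / disparity. A minimal sketch, with hypothetical focal-length and baseline values chosen only for illustration:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo model: depth (meters) from disparity (pixels).

    disparity_px: horizontal shift of a point between the two views
    focal_px:     camera focal length expressed in pixels
    baseline_m:   distance between the two camera centers, in meters
    """
    return focal_px * baseline_m / disparity_px

# Example values (hypothetical rig): 700 px focal length, 12 cm baseline.
z = depth_from_disparity(50.0, 700.0, 0.12)  # point with 50 px disparity
```

The inverse relation between disparity and depth is why stereo rigs need a wide enough baseline for distant scenes, and it is exactly this second camera that the monocular method of the invention dispenses with.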

Claims


Application Information

IPC(8): G06T7/50
CPC: G06T2207/10024; G06T2207/10028; G06T2207/20081; G06T2207/20084; G06T7/50
Inventor: 侯春萍, 管岱, 杨阳, 郎玥, 章衡光
Owner: TIANJIN UNIV