A visual depth estimation method based on depth-differentiable convolutional neural network

A convolutional neural network technology applied in the field of monocular visual depth estimation, which addresses the problems of the small share of high-accuracy feature information, insufficient feature diversity, and loss of object edge information.

Active Publication Date: 2019-01-04
牧野微(上海)半导体技术有限公司


Problems solved by technology

[0005] Laina et al. proposed a depth estimation neural network model based on a fully convolutional residual network. The model's entire process from original image input to predicted depth map output is one-way. Although the network is deep enough and collects some high-accuracy feature information, the share of this high-accuracy information in the overall feature information is very small, and because of the single-path structure of the model, the diversity of the extracted features is also insufficient. During the one-way and lengthy feature collection process, the edge information of objects in the image is lost, which may lead to a decrease in overall prediction accuracy.




Embodiment Construction

[0033] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0034] The visual depth estimation method based on a depthwise separable convolutional neural network proposed by the present invention comprises two processes: a training phase and a testing phase.

[0035] The specific steps of the training phase are as follows:

[0036] Step 1_1: Select N original monocular images and the real depth image corresponding to each original monocular image to form a training set. Record the nth original monocular image in the training set as {Q_n(x,y)}, and record the real depth image in the training set corresponding to {Q_n(x,y)}. Here N is a positive integer with N ≥ 1000, for example N = 4000; n is a positive integer with 1 ≤ n ≤ N; 1 ≤ x ≤ R and 1 ≤ y ≤ L, where R denotes the width of {Q_n(x,y)} and of its real depth image, L denotes their height, and R and L are both divisible by 2; Q_n(x,y) denotes the pixel value of the pixe...
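A minimal sketch of assembling the training set described in Step 1_1 is given below. The directory layout, file naming, and the use of PyTorch are assumptions for illustration, not taken from the patent text; only the constraints N ≥ 1000 and R, L divisible by 2 come from the step above.

```python
# Hypothetical sketch of Step 1_1: pairing monocular images Q_n with real depth images.
# Directory names "images/" and "depths/" are assumptions, not from the patent.
import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class MonocularDepthSet(Dataset):
    """Pairs each original monocular image Q_n(x,y) with its real depth image."""

    def __init__(self, image_dir="images", depth_dir="depths"):
        self.image_dir, self.depth_dir = image_dir, depth_dir
        self.names = sorted(os.listdir(image_dir))          # N image files, e.g. N = 4000
        assert len(self.names) >= 1000, "the patent suggests N >= 1000 training pairs"

    def __len__(self):
        return len(self.names)

    def __getitem__(self, n):
        q = Image.open(os.path.join(self.image_dir, self.names[n])).convert("RGB")
        d = Image.open(os.path.join(self.depth_dir, self.names[n]))
        R, L = q.size                                        # width R, height L
        assert R % 2 == 0 and L % 2 == 0, "R and L must be divisible by 2"
        q = torch.from_numpy(np.asarray(q, dtype=np.float32)).permute(2, 0, 1) / 255.0
        d = torch.from_numpy(np.asarray(d, dtype=np.float32)).unsqueeze(0)
        return q, d                                          # (Q_n, corresponding real depth)
```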



Abstract

The invention discloses a visual depth estimation method based on a depthwise separable convolutional neural network. First, a depthwise separable convolutional neural network is constructed, whose hidden layers include a convolution layer, a batch normalization layer, an activation layer, a max pooling layer, a conv_block network block, a depthwise convolution network block, a concatenate fusion layer, an add fusion layer, a deconvolution layer and a separable convolution layer. Then, the monocular images in the training set are used as original input images and fed into the network for training, yielding the estimated depth image corresponding to each monocular image. Next, by calculating the loss function between the estimated depth image and the real depth image corresponding to each monocular image in the training set, the training model and the optimal weight vector of the depthwise separable convolutional neural network are obtained. Finally, the monocular image to be predicted is input into the trained model, and the corresponding predicted depth image is obtained using the optimal weight vector. The advantage is that the prediction accuracy is high.
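To make the listed layer types concrete, the following is an illustrative sketch, not the patent's actual architecture: a tiny encoder-decoder that combines convolution, batch normalization, activation, max pooling, a depthwise separable convolution, a deconvolution, and concatenate fusion, trained against the real depth map with a simple loss. All channel counts, kernel sizes, and the choice of L1 loss are assumptions.

```python
# Illustrative sketch only: layer types named in the abstract, not the patented network.
import torch
import torch.nn as nn


class SeparableConv2d(nn.Module):
    """Depthwise separable convolution: per-channel (depthwise) conv + 1x1 pointwise conv."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


class ToyDepthNet(nn.Module):
    """Tiny encoder-decoder using the layer types listed in the abstract (channel sizes assumed)."""

    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU())
        self.pool = nn.MaxPool2d(2)                         # halves R and L (hence R, L divisible by 2)
        self.enc2 = nn.Sequential(SeparableConv2d(32, 64), nn.BatchNorm2d(64), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)   # deconvolution back to full resolution
        self.head = nn.Conv2d(64, 1, 3, padding=1)          # predicts a single-channel depth map

    def forward(self, x):
        e1 = self.enc1(x)                                   # full-resolution features
        e2 = self.enc2(self.pool(e1))                       # half-resolution separable-conv features
        d = self.up(e2)                                     # deconvolution (upsampling)
        d = torch.cat([d, e1], dim=1)                       # concatenate fusion with encoder features
        return self.head(d)


# Usage sketch: compute a loss between the estimated and the real depth image.
net = ToyDepthNet()
pred = net(torch.randn(2, 3, 64, 64))                       # -> shape (2, 1, 64, 64)
loss = nn.L1Loss()(pred, torch.randn(2, 1, 64, 64))         # L1 loss is an assumption
```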

Description

Technical field

[0001] The invention relates to monocular visual depth estimation technology, and in particular to a visual depth estimation method based on a depthwise separable convolutional neural network.

Background technique

[0002] In today's era of rapid development, with the continuous improvement of society's material living standards, artificial intelligence technology is used in more and more aspects of people's daily life. Computer vision tasks, as one of the representatives of artificial intelligence, have also received more and more attention. As one of the computer vision tasks, monocular visual depth estimation is becoming more and more important in assisted driving technology.

[0003] The automobile is one of the indispensable means of transportation for people today, and its development has always been valued by society. Especially with the maturity of artificial intelligence technology, unmanned driving, a representative artifici...

Claims


Application Information

IPC(8): G06T7/50, G06N3/04, G06N3/08
CPC: G06N3/08, G06T7/50, G06T2207/10004, G06N3/045
Inventor: 周武杰, 袁建中, 吕思嘉, 钱亚冠, 向坚, 张宇来
Owner: 牧野微(上海)半导体技术有限公司