
Systems and methods for depth estimation via affinity learned with convolutional spatial propagation networks

A spatial propagation and depth estimation technology, applied to neural learning methods, biological neural network models, computing, etc., which addresses the problem of blurry depth predictions.

Active Publication Date: 2020-04-24
BAIDU COM TIMES TECH (BEIJING) CO LTD +1

AI Technical Summary

Problems solved by technology

However, upon close inspection of the output of some methods, the predicted depth is rather blurry and does not align well with structures in the image, such as object contours.



Examples


Embodiment approach

[0046] Embodiments can be viewed as an anisotropic diffusion process that learns a diffusion tensor directly from a given image by a deep CNN to guide the refinement of the output.
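For reference, the continuous-form analogue of such a process is the anisotropic diffusion equation (an analogy drawn here for clarity; the disclosure itself states only the discrete update in [0050]):

$$\frac{\partial H}{\partial t} = \operatorname{div}\!\left(D(X)\,\nabla H\right)$$

where $D(X)$ is the diffusion tensor predicted from the image $X$ by the CNN. The propagation step below is the learned, discrete counterpart of this equation, and the normalization of the affinities in [0051] and [0052] is what keeps the iteration stable.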

[0047] 1. Convolutional Spatial Propagation Network

[0048] Given a depth map $D_o$ output from the network and an image $X$, one task is to update the depth map to a new depth map $D_n$ within $N$ iteration steps, which firstly reveals more details of the image and secondly improves the pixel-wise depth estimation results.

[0049] Figure 2B shows an update operation propagation process using CSPN according to various embodiments of the present disclosure. Formally, without loss of generality, we can embed the depth map $D_o$ into some hidden space $H \in \mathbb{R}^{m \times n \times c}$. For each time step $t$, the convolutional transformation function with kernel size $k$ can be written as:

[0050]
$$H_{i,j,t+1} = \kappa_{i,j}(0,0) \odot H_{i,j,t} + \sum_{\substack{a,b = -(k-1)/2 \\ (a,b) \neq (0,0)}}^{(k-1)/2} \kappa_{i,j}(a,b) \odot H_{i-a,j-b,t}$$

[0051] where $\kappa_{i,j}(0,0) = 1 - \sum_{a,b,\,(a,b)\neq(0,0)} \kappa_{i,j}(a,b)$, and the affinity weights are normalized as

[0052]
$$\kappa_{i,j}(a,b) = \frac{\hat{\kappa}_{i,j}(a,b)}{\sum_{a,b,\,(a,b)\neq(0,0)} \left|\hat{\kappa}_{i,j}(a,b)\right|}$$

[0053] where the transformation kernel $\hat{\kappa}_{i,j}$ is spatially dependent on the input image and is learned by the model.
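The disclosure does not tie the update to a particular framework, so the following is a minimal sketch in PyTorch (an assumption). The function name cspn_step, the tensor layout, and sharing one affinity map across all channels are illustrative choices of this sketch, not the patented implementation:

```python
import torch
import torch.nn.functional as F

def cspn_step(hidden: torch.Tensor, kernels: torch.Tensor) -> torch.Tensor:
    """One CSPN propagation step, following [0050]-[0052] (illustrative sketch).

    hidden  -- (B, C, H, W) hidden representation H_t (the embedded depth map).
    kernels -- (B, k*k, H, W) raw affinities kappa_hat predicted by a CNN for a
               k x k neighborhood (k odd); one map shared across all C channels,
               a simplifying assumption of this sketch.
    """
    b, c, h, w = hidden.shape
    k = int(kernels.shape[1] ** 0.5)   # e.g. 3 for a 3x3 neighborhood
    mid = (k * k) // 2                 # flat index of the center (0,0) weight

    # kappa(a,b) = kappa_hat(a,b) / sum_{(a,b) != (0,0)} |kappa_hat(a,b)|
    off = torch.cat([kernels[:, :mid], kernels[:, mid + 1:]], dim=1)
    off = off / off.abs().sum(dim=1, keepdim=True).clamp_min(1e-8)

    # kappa(0,0) = 1 - sum_{(a,b) != (0,0)} kappa(a,b)
    center = 1.0 - off.sum(dim=1, keepdim=True)           # (B, 1, H, W)

    # Gather each pixel's k x k neighborhood, drop the center: (B, C, k*k-1, H, W)
    neigh = F.unfold(hidden, k, padding=k // 2).view(b, c, k * k, h, w)
    neigh = torch.cat([neigh[:, :, :mid], neigh[:, :, mid + 1:]], dim=2)

    # H_{t+1} = kappa(0,0) * H_t + sum_{(a,b) != (0,0)} kappa(a,b) * H_{i-a,j-b,t}
    return center * hidden + (off.unsqueeze(1) * neigh).sum(dim=2)
```

Applying hidden = cspn_step(hidden, kernels) for $N$ iterations and reading the result back out of the hidden space yields the refined depth map $D_n$.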



Abstract

Presented are systems and methods for improving the speed and quality of real-time per-pixel depth estimation of scene layouts from a single image by using an end-to-end Convolutional Spatial Propagation Network (CSPN). An efficient linear propagation model performs propagation using a recurrent convolutional operation. The affinity among neighboring pixels may be learned through a deep convolutional neural network (CNN). The CSPN may be applied to two depth estimation tasks, given a single image: (1) to refine the depth output of existing methods, and (2) to convert sparse depth samples to a dense depth map, e.g., by embedding the depth samples within the propagation procedure. The conversion ensures that the sparse input depth values are preserved in the final depth map, and it runs in real time and is thus well suited for robotics and autonomous driving applications, where sparse but accurate depth measurements, e.g., from LiDAR, can be fused with image data.
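For task (2), one straightforward way to realize the "embedding" described above is a replacement step after every propagation iteration, which guarantees the sparse inputs survive in the output. A hedged sketch building on the cspn_step function from the embodiment section (the mask formulation, variable names, and step count are illustrative, not taken from the source):

```python
def propagate_with_sparse_depth(hidden, kernels, sparse_depth, mask, n_steps=24):
    """Run CSPN while preserving sparse depth measurements (illustrative sketch).

    hidden       -- (B, 1, H, W) initial dense depth estimate (hidden state).
    sparse_depth -- (B, 1, H, W) measured depths (e.g., from LiDAR), 0 elsewhere.
    mask         -- (B, 1, H, W) 1.0 where a measurement exists, else 0.0.
    n_steps      -- illustrative iteration count, not a value from the source.
    """
    for _ in range(n_steps):
        hidden = cspn_step(hidden, kernels)
        # Replacement step: overwrite propagated values at measured pixels so
        # the sparse input depth values are preserved exactly in the result.
        hidden = (1.0 - mask) * hidden + mask * sparse_depth
    return hidden
```

Because the overwrite happens inside the loop, the measurements also steer their neighbors during propagation rather than being pasted in only at the end.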

Description

Technical field

[0001] The present disclosure generally relates to systems, devices, and methods for image-based depth estimation, which can be used in various applications, such as augmented reality (AR), autonomous driving, and robotics.

Background technique

[0002] Depth estimation, i.e., predicting the pixel-wise distance to a camera, from a single image is a fundamental problem in computer vision and has many applications ranging from AR and autonomous driving to robotics. Recent efforts to estimate pixel-wise depth from a single image by exploiting deep fully convolutional neural networks (e.g., trained on large amounts of indoor and outdoor data) have produced high-quality output. The improvements mainly consist of utilizing advanced networks, such as the Visual Geometry Group (VGG) network and Residual Networks (ResNet), to more accurately estimate global scene layout and scale, as well as better localization through deconvolution operations, skip connections, and up-projection structures...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/50
CPC: G06T7/50; G06T2207/20081; G06T2207/20084; G06T2207/10004; G06N3/084; G06N3/045; G06N3/08
Inventors: 王鹏 (Peng Wang), 程新景 (Xinjing Cheng), 杨睿刚 (Ruigang Yang)
Owner: BAIDU COM TIMES TECH (BEIJING) CO LTD