
Method, device, terminal and storage medium for depth estimation of monocular video

A depth estimation technology for video, applied in the field of image processing, which addresses the problems that existing depth maps have low accuracy and that an uncertainty distribution map cannot be estimated and output alongside the depth map, and achieves the effect of improving prediction accuracy.

Active Publication Date: 2021-06-11
HISCENE INFORMATION TECH CO LTD

AI Technical Summary

Problems solved by technology

[0004] However, existing supervised-learning CNN models for monocular depth estimation can only predict and output a depth map; they cannot simultaneously estimate and output the uncertainty distribution map corresponding to that depth map. As a result, the accuracy of the depth maps predicted by existing monocular depth estimation network models is not high.
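
As context for the problem statement above, one common way to make a supervised depth network output an uncertainty estimate alongside its depth prediction is to add a second output branch and train with a heteroscedastic negative log-likelihood loss. The sketch below is illustrative only and is not the patent's preset neural network model; all class and function names are hypothetical.

```python
# Hypothetical sketch (not the patent's model): a decoder head that predicts a
# depth map and a per-pixel uncertainty map jointly, trained with a
# heteroscedastic negative log-likelihood loss.
import torch
import torch.nn as nn

class DepthUncertaintyHead(nn.Module):
    def __init__(self, in_channels: int = 64):
        super().__init__()
        self.depth = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)
        # Predict log-variance rather than variance for numerical stability.
        self.log_var = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)

    def forward(self, features):
        depth = torch.relu(self.depth(features))   # non-negative depth map
        log_var = self.log_var(features)           # per-pixel log-variance map
        return depth, log_var

def nll_loss(pred_depth, log_var, gt_depth):
    # Heteroscedastic regression loss: a large predicted variance down-weights
    # the squared residual but is penalized by the log-variance term.
    return torch.mean(0.5 * torch.exp(-log_var) * (pred_depth - gt_depth) ** 2
                      + 0.5 * log_var)
```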

Method used



Examples


Embodiment 1

[0029] Figure 1 is a flow chart of a monocular video depth estimation method provided by Embodiment 1 of the present invention. This embodiment is applicable to performing monocular depth estimation on each image frame in a sequence of video frames, in particular to performing depth estimation on the image frames of a monocular video in drones, robots, autonomous driving or augmented reality, so that the distance between objects can be determined from the estimated depth map; it can also be used in other application scenarios that require depth estimation of monocular video. The method may be performed by a monocular video depth estimation device, which may be implemented in software and/or hardware and integrated into a terminal that needs to estimate depth, such as a drone or a robot. The method specifically includes the following steps:

[0030] S110. Acquire the image frame sequence of the monocular video, and calculat...
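
The visible text of step S110 does not name a specific camera pose estimation algorithm. The sketch below shows one conventional way to obtain the pose relationship between two adjacent image frames, using feature matching and essential-matrix decomposition with OpenCV; it assumes the camera intrinsic matrix K is known and is not presented as the patent's prescribed method.

```python
# Minimal sketch of a two-view relative pose estimator between adjacent frames.
# Assumes grayscale images and a known intrinsic matrix K; names are illustrative.
import cv2
import numpy as np

def relative_pose(frame_a, frame_b, K):
    """Estimate rotation R and unit-scale translation t from frame_a to frame_b."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    # Robustly estimate the essential matrix, then decompose it into R and t.
    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)
    return R, t
```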

Embodiment 2

[0090] Figure 6 is a schematic structural diagram of a monocular video depth estimation device provided by Embodiment 2 of the present invention. This embodiment is applicable to performing monocular depth estimation on each image frame in a sequence of video frames. The device includes: a pose relationship determination module 210, an initial depth information determination module 220 and a final depth information determination module 230.

[0091] The pose relationship determination module 210 is configured to acquire the image frame sequence of the monocular video and to calculate the pose relationship between two adjacent image frames in the sequence according to a camera pose estimation algorithm; the initial depth information determination module 220 is configured to take each image frame in the sequence in turn as the input of a preset neural network model and to determine the initial depth map and the initial uncertainty distribution map of each image frame...
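
Purely as an illustration of how the three modules named in Embodiment 2 might fit together in code, the following skeleton composes a pose module, a depth module and a fusion module; the internals of each module are placeholders, not the patent's implementation.

```python
# Illustrative composition of the three modules from Embodiment 2.
# Each module is assumed to be a callable supplied by the user.
class MonocularDepthEstimationDevice:
    def __init__(self, pose_module, depth_module, fusion_module):
        self.pose_relationship_determination = pose_module           # module 210
        self.initial_depth_information_determination = depth_module  # module 220
        self.final_depth_information_determination = fusion_module   # module 230

    def run(self, frames):
        # Pose relationships between adjacent frames.
        poses = self.pose_relationship_determination(frames)
        # Per-frame initial depth and uncertainty maps from the network.
        init_depths, init_uncerts = self.initial_depth_information_determination(frames)
        # Inter-frame transfer and fusion to obtain the final maps.
        return self.final_depth_information_determination(poses, init_depths, init_uncerts)
```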

Embodiment 3

[0130] Figure 7 is a schematic structural diagram of a terminal provided in Embodiment 3 of the present invention. Referring to Figure 7, the terminal includes:

[0131] one or more processors 310;

[0132] memory 320, for storing one or more programs;

[0133] When the one or more programs are executed by the one or more processors 310, the one or more processors 310 implement the monocular video depth estimation method proposed in any one of the above embodiments.

[0134] Figure 7 takes one processor 310 as an example. The processor 310 and the memory 320 in the terminal may be connected by a bus or in other ways; Figure 7 takes a bus connection as an example.

[0135] The memory 320, as a computer-readable storage medium, can be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the monocular video depth estimation method in the embodiment of the present invention (for example,...



Abstract

The embodiment of the invention discloses a monocular video depth estimation method, device, terminal and storage medium. The method includes: acquiring a sequence of image frames of a monocular video, and calculating the pose relationship between two adjacent image frames in the sequence according to a camera pose estimation algorithm; taking each image frame in the sequence in turn as the input of a preset neural network model, and determining the initial depth map and initial uncertainty distribution map of each image frame according to the output of the preset neural network model; and performing inter-frame information transfer and fusion according to each pose relationship and the initial depth map and initial uncertainty distribution map of each image frame, so as to determine the final depth map and final uncertainty distribution map of each image frame in turn. The technical solution of the embodiment of the present invention can perform depth restoration on the image frames of a monocular video, which not only improves the prediction accuracy of the depth map but also yields the uncertainty distribution of the depth map.
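
The abstract describes inter-frame transfer and fusion of the per-frame depth and uncertainty maps but does not spell out the fusion rule in the visible text. The following sketch assumes an inverse-variance (Kalman-style) weighting between the propagated previous estimate and the current network prediction, purely to illustrate the flow; estimate_pose, cnn_predict and warp are hypothetical helpers.

```python
# Hedged sketch of the overall flow: per-frame prediction followed by
# inter-frame propagation and uncertainty-weighted fusion (assumed rule).
def fuse(prev_depth_warped, prev_var_warped, cur_depth, cur_var):
    # Inverse-variance weighting: pixels where the propagated estimate is
    # confident pull the fused depth toward it, and vice versa.
    w_prev = 1.0 / (prev_var_warped + 1e-6)
    w_cur = 1.0 / (cur_var + 1e-6)
    fused_depth = (w_prev * prev_depth_warped + w_cur * cur_depth) / (w_prev + w_cur)
    fused_var = 1.0 / (w_prev + w_cur)
    return fused_depth, fused_var

def estimate_sequence(frames, estimate_pose, cnn_predict, warp):
    """frames: list of images; estimate_pose, cnn_predict, warp are assumed helpers."""
    final_depths, final_vars = [], []
    for i, frame in enumerate(frames):
        depth, var = cnn_predict(frame)                 # initial depth + uncertainty
        if i > 0:
            pose = estimate_pose(frames[i - 1], frame)  # pose between adjacent frames
            prev_d, prev_v = warp(final_depths[-1], final_vars[-1], pose)
            depth, var = fuse(prev_d, prev_v, depth, var)  # inter-frame fusion
        final_depths.append(depth)
        final_vars.append(var)
    return final_depths, final_vars
```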

Description

Technical Field

[0001] Embodiments of the present invention relate to image processing technologies, and in particular to a method, device, terminal and storage medium for depth estimation of monocular video.

Background

[0002] In the field of computer vision research, more and more attention is being paid to monocular depth estimation, that is, estimating depth from the mapping between visual cues hidden in a single image, such as size, shading and planes, and the real depth values. Monocular depth estimation has many applications, such as scene understanding, semantic segmentation, 3D modeling and robot obstacle avoidance. Traditional monocular estimation methods mainly rely on Structure-from-Motion (SfM) technology, or on Simultaneous Localization and Mapping (SLAM) technology based on monocular cameras, which is widely used in the field of robotics. SfM and SLAM use multi-view images to estimate the pose of the monocular c...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/55
CPC: G06T2207/10028; G06T2207/20221; G06T7/55
Inventor: Not disclosed
Owner: HISCENE INFORMATION TECH CO LTD