
Depth estimation method and apparatus of monocular video, terminal, and storage medium

A depth estimation technology for video, applied in the field of image processing, which can solve problems such as the low accuracy of the depth map and the inability to estimate and output an uncertainty distribution map alongside the depth map.

Active Publication Date: 2018-11-06
HISCENE INFORMATION TECH CO LTD
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0004] However, existing supervised-learning CNN models for monocular depth estimation can only predict and output a depth map; they cannot simultaneously estimate and output the uncertainty distribution map corresponding to that depth map, so the accuracy of the depth maps predicted by existing monocular depth estimation network models is not high.

Method used



Examples


Embodiment 1

[0029] Figure 1 is a flowchart of the monocular video depth estimation method provided by Embodiment 1 of the present invention. This embodiment is applicable to performing monocular depth estimation on each image frame in a sequence of video frames, in particular to depth estimation on image frames of monocular video in drones, robots, autonomous driving, or augmented reality, so that the distance to objects can be determined from the estimated depth map; it can also be used in other application scenarios that require depth estimation of monocular video. The method can be performed by a monocular video depth estimation device, which can be implemented by software and/or hardware and integrated into a terminal that needs to estimate depth, such as a drone or a robot. The method specifically includes the following steps:

[0030] S110. Acquire the image frame sequence of the monocular video, and calculat...
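
The text of step S110 is truncated here, but per the abstract it acquires the image frame sequence and calculates the pose relationship between each two adjacent frames with a camera pose estimation algorithm. The visible text does not specify which algorithm, so the sketch below assumes a standard feature-based two-view approach (ORB matching plus essential-matrix decomposition with OpenCV) purely as an illustration; `relative_pose` and the intrinsics parameter `K` are hypothetical names, not part of the patent.

```python
# Hypothetical sketch of step S110's pose estimation between two adjacent frames.
# Assumes grayscale frames and known camera intrinsics K; the choice of ORB
# features + essential-matrix decomposition is an illustration, not the
# algorithm claimed by the patent.
import cv2
import numpy as np

def relative_pose(frame_a, frame_b, K):
    """Return (R, t) of frame_b's camera relative to frame_a's (t is up to scale)."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)

    # Brute-force Hamming matching with cross-check, keep the best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:500]

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, then recover rotation and unit-scale translation.
    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)
    return R, t
```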

Embodiment 2

[0090] Figure 6 is a schematic structural diagram of a monocular video depth estimation device provided by Embodiment 2 of the present invention. This embodiment is applicable to performing monocular depth estimation on each image frame in a sequence of video frames. The device includes: a pose relationship determination module 210, an initial depth information determination module 220, and a final depth information determination module 230.

[0091] The pose relationship determination module 210 is used to acquire the image frame sequence of the monocular video and to calculate the pose relationship between each two adjacent image frames in the sequence according to a camera pose estimation algorithm; the initial depth information determination module 220 is used to take each image frame in the sequence in turn as the input of a preset neural network model and to determine the initial depth map and the initial uncertainty distribution map of each image frame...
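
As a reading aid, the following is a minimal sketch of the three-module split described above: pose relationship determination (210), initial depth information determination (220), and final depth information determination (230). The callables `pose_fn`, `depth_net`, and `fuse_fn` are hypothetical placeholders for those modules; none of the names come from the patent.

```python
# Minimal structural sketch of the device in Embodiment 2. The three injected
# callables stand in for modules 210, 220 and 230; their concrete behaviour is
# not specified here and must be supplied by the caller.
class MonocularDepthEstimator:
    def __init__(self, pose_fn, depth_net, fuse_fn):
        self.pose_fn = pose_fn      # module 210: pose between adjacent frames
        self.depth_net = depth_net  # module 220: initial depth + uncertainty per frame
        self.fuse_fn = fuse_fn      # module 230: inter-frame fusion into final maps

    def run(self, frames, K):
        """Process a monocular frame sequence; returns per-frame (depth, uncertainty)."""
        results, prev = [], None
        for frame in frames:
            depth, uncert = self.depth_net(frame)                        # module 220
            if prev is not None:
                R, t = self.pose_fn(prev["frame"], frame, K)             # module 210
                depth, uncert = self.fuse_fn(prev, depth, uncert, R, t)  # module 230
            prev = {"frame": frame, "depth": depth, "uncert": uncert}
            results.append((depth, uncert))
        return results
```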

Embodiment 3

[0130] Figure 7 is a schematic structural diagram of a terminal provided by Embodiment 3 of the present invention. Referring to Figure 7, the terminal includes:

[0131] one or more processors 310;

[0132] memory 320, for storing one or more programs;

[0133] When the one or more programs are executed by the one or more processors 310, the one or more processors 310 implement the monocular video depth estimation method proposed in any one of the above embodiments.

[0134] Figure 7 takes one processor 310 as an example; the processor 310 and the memory 320 in the terminal can be connected through a bus or in other ways, and Figure 7 takes connection via a bus as an example.

[0135] The memory 320, as a computer-readable storage medium, can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the monocular video depth estimation method in the embodiments of the present invention (for example,...



Abstract

The embodiment of the invention discloses a depth estimation method and apparatus of a monocular video, a terminal, and a storage medium. The method includes the steps of: obtaining an image frame sequence of the monocular video, and calculating a pose relation between each two adjacent image frames in the sequence according to a camera pose estimation algorithm; taking the image frames in the sequence in turn as input of a preset neural network model, and determining an initial depth map and an initial uncertainty distribution map of each image frame according to the output of the preset neural network model; and performing inter-frame information transmission and fusion according to the pose relations, the initial depth maps, and the initial uncertainty distribution maps of the image frames, and determining the final depth maps and final uncertainty distribution maps of the image frames in sequence. According to the technical scheme, depth restoration of the image frames of the monocular video can be performed, the prediction precision of the depth maps can be improved, and the uncertainty distribution of the depth maps can be obtained.
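
The abstract states that the pose relations, initial depth maps, and initial uncertainty distribution maps are transmitted and fused between frames, but it does not disclose the fusion rule itself. The sketch below therefore assumes a simple per-pixel, uncertainty-weighted (Kalman-style) fusion of a depth estimate propagated from the previous frame with the network's initial prediction for the current frame; the warping of the previous frame's depth into the current view via the estimated pose is omitted, and the function name `fuse_depth` is hypothetical.

```python
# Assumed per-pixel fusion of a propagated depth hypothesis with the current
# frame's initial network prediction, weighted by their variances (uncertainties).
# This is an illustrative stand-in, not the fusion rule claimed by the patent.
import numpy as np

def fuse_depth(propagated_depth, propagated_var, initial_depth, initial_var, eps=1e-8):
    """Variance-weighted fusion: the lower-variance estimate gets the larger weight."""
    w = propagated_var / (propagated_var + initial_var + eps)   # weight on the new prediction
    fused_depth = (1.0 - w) * propagated_depth + w * initial_depth
    fused_var = (propagated_var * initial_var) / (propagated_var + initial_var + eps)
    return fused_depth, fused_var
```

Fusing in this way both refines the depth (the fused variance is never larger than either input variance) and yields an updated uncertainty map to propagate to the next frame, which is consistent with the abstract's claim that prediction precision improves while an uncertainty distribution is obtained.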

Description

Technical field

[0001] Embodiments of the present invention relate to image processing technologies, and in particular to a depth estimation method, device, terminal, and storage medium for monocular video.

Background technique

[0002] In the field of computer vision research, more and more people are studying monocular depth estimation, that is, estimating depth from a single image by exploiting the mapping relationship between the visual cues hidden in the image, such as size, shadow, and planes, and the real depth values. Monocular depth estimation has many applications, such as scene understanding, semantic segmentation, 3D modeling, and robot obstacle avoidance. Traditional monocular estimation methods mainly rely on Structure-from-Motion (SfM) technology, or on Simultaneous Localization and Mapping (SLAM) technology based on monocular cameras, which is widely used in the field of robotics. SfM and SLAM use multi-view images to estimate the pose of the monocular c...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T7/55
CPC: G06T2207/10028; G06T2207/20221; G06T7/55
Inventor: Not disclosed
Owner: HISCENE INFORMATION TECH CO LTD