Underwater scene depth evaluation method and system based on monocular vision

A scene-depth, monocular-vision technology applied in the field of computer vision. It addresses the problems of limited practical application scenarios, light sensitivity within the working range, and over-reliance on hardware devices, achieving the effects of saving computation and reducing restrictive conditions.

Pending Publication Date: 2022-06-21
JIANGSU UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

[0004] At present, SLAM methods based on depth cameras or binocular cameras are usually used to achieve dense reconstruction of underwater 3D scenes. The shortcomings of this type of method are obvious: over-reliance on hardware devices limits the practical application scenarios.
In particular, a binocular system must use a stereo camera to collect two images simultaneously at every moment, which raises cost and requires accounting for the effect of the camera baseline on system accuracy, both of which hinder underwater scene depth evaluation. Depth cameras, in turn, have a limited working range and are sensitive to light, so they are suitable only for close-range indoor environments.

Method used



Examples


Embodiment 1

[0045] Referring to figure 1, a flowchart of the monocular-vision-based underwater scene depth evaluation method provided by the present invention. An embodiment of the present invention provides a monocular-vision-based underwater scene depth evaluation method comprising the following steps:

[0046] S1: Acquire sensor data in an underwater dynamic scene, where the sensor data is scene image data captured by a monocular camera under different viewing angles.

[0047] In this embodiment, the scene image data is obtained by mounting a monocular camera on an underwater device or underwater robot that enters the underwater dynamic scene and shoots image data from different rotational perspectives.

[0048] Before the monocular camera captures images, the method also includes establishing a global coordinate system for the underwater motion scene by gridding, and establishing the gridded intersections between the pixels imag...
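The patent's core idea, recovering depth from a single camera observed at different view angles, is classically done by two-view linear (DLT) triangulation: given the camera's projection matrices at two poses and matched pixel observations of the same feature, the 3-D point (and hence its depth) is solved without any stereo baseline constraint. A minimal sketch under that assumption (the function name and matrices are illustrative, not from the patent text):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature point seen in two views.

    P1, P2: 3x4 projection matrices K[R|t] for the two camera poses.
    x1, x2: (u, v) pixel observations of the same feature point.
    Returns the 3-D point in the global frame; its z component in a
    camera frame is the visual depth value.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X: u * (P[2] @ X) = P[0] @ X, and similarly for v.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Because only one camera is moved between the two poses, the baseline is whatever motion occurred, which is why a monocular pipeline need not consider a fixed stereo baseline's effect on accuracy.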

Embodiment 2

[0082] As shown in Figure 4, in a preferred embodiment provided by the present invention, a monocular-vision-based underwater scene depth assessment system includes an acquisition unit 100, a feature extraction unit 200, a depth judgment unit 300 and a depth assessment unit 400, wherein:

[0083] The acquiring unit 100 is configured to acquire scene image data captured by a monocular camera in different viewing angles in an underwater dynamic scene.

[0084] In this embodiment, each video frame image in the scene image data captured by the acquisition unit 100 corresponds to one viewing angle of the monocular camera, and image data in the scene is captured at different rotational viewing angles.

[0085] The feature extraction unit 200 is configured to extract feature points in scene image data from different perspectives, and obtain feature extraction results of objects captured in each video frame.
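The patent does not specify how feature points from different perspectives are associated; a common approach is brute-force nearest-neighbour descriptor matching with Lowe's ratio test, which keeps only unambiguous correspondences between two frames. A hedged sketch (the function and threshold are illustrative assumptions, not the patent's method):

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.75):
    """Brute-force nearest-neighbour matching with Lowe's ratio test.

    desc1, desc2: (N, D) float arrays of feature descriptors extracted
    from two video frames taken at different viewing angles.
    Returns a list of (i, j) index pairs accepted by the ratio test.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if the best match is clearly better than the runner-up,
        # which rejects ambiguous matches on repetitive underwater texture.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

The accepted pairs are exactly the per-feature correspondences a later depth-estimation stage needs.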



Abstract

The invention discloses an underwater scene depth evaluation method and system based on monocular vision. The method comprises the following steps: acquiring scene image data captured by a monocular camera at different visual angles in an underwater dynamic scene; performing feature extraction on video frames in the scene image data, extracting feature points in the scene image data of different visual angles; performing depth estimation on the feature extraction result of the shot object, based on a depth estimation model and according to the change of the camera's visual angle position, to obtain a visual depth value of the shot object; and performing depth processing on the scene image data, based on the camera's pose parameters at different view angles, to optimize the object-surface visual depth values of the feature points and obtain a scene depth estimation image. The system comprises an acquisition unit, a feature extraction unit, a depth judgment unit, a depth evaluation unit and a registration unit. According to the invention, underwater scene depth evaluation can be completed without considering the influence of the camera baseline on system precision.
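The abstract's final step, optimizing the depth values using pose parameters from many view angles, can be sketched as the N-view extension of linear triangulation: every additional pose contributes two more linear constraints, tightening the estimate. This is a generic multi-view sketch under assumed known poses, not the patent's specific optimization:

```python
import numpy as np

def refine_depth(Ps, xs):
    """Multi-view linear triangulation of one feature point.

    Ps: list of 3x4 projection matrices, one per camera pose/view angle.
    xs: list of (u, v) pixel observations of the same feature point.
    Returns the refined 3-D point in the global frame; stacking
    constraints from all poses averages out per-view noise.
    """
    rows = []
    for P, (u, v) in zip(Ps, xs):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    # Least-squares solution of the stacked homogeneous system via SVD.
    _, _, Vt = np.linalg.svd(np.stack(rows))
    X = Vt[-1]
    return X[:3] / X[3]
```

With noisy pixels, a nonlinear reprojection-error minimization seeded by this linear solution is the usual next step; the linear form above is enough to show why more poses constrain the depth more tightly.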

Description

Technical field
[0001] The invention relates to computer vision technology, and in particular to a method and system for evaluating the depth of underwater scenes based on monocular vision.
Background technique
[0002] With the continuous exploration of the ocean and other waters, underwater operations, marine scientific investigations and other activities are increasing. In order to improve the efficiency of underwater operations and achieve the purposes of marine scientific investigation, underwater operation equipment is usually used to assist operations. Because underwater scene environments are varied and complex, to meet task-execution needs and improve people's perception of underwater scenes, it is necessary to use technologies such as computer vision and image processing to evaluate the position of underwater operation equipment in unknown environments, reconstruct the three-dimensional structure of the scene, and observe the underwater scen...

Claims


Application Information

IPC(8): G06T7/55, G06T7/73, G06T7/33, G06T1/00
CPC: G06T7/55, G06T7/33, G06T1/0007, G06T7/73, G06T2207/10016, G06T2207/10028
Inventor: 王红茹, 刘朝, 王佳
Owner JIANGSU UNIV OF SCI & TECH