
Multi-camera-array depth perception method

A multi-camera depth-sensing technology, applied in the field of multi-camera-array depth perception, addressing the problem that existing single-camera receiving modes deliver depth-map resolution, accuracy, range, and real-time performance that fall short of application requirements.

Publication Date: 2014-05-28 (Inactive)
XI AN JIAOTONG UNIV

AI Technical Summary

Problems solved by technology

[0005] At present, the depth-sensing devices developed by Microsoft and Apple all adopt a single-camera receiving mode and are mainly suited to consumer electronics. In terms of depth-map resolution, accuracy, range, and real-time performance, they can hardly meet the application requirements of fields such as unmanned-vehicle assisted driving, high-speed machine-tool processing, industrial 3D modeling, and 3D printing.



Examples


Embodiment Construction

[0023] The present invention will be further described in detail below in conjunction with specific examples.

[0024] In general, the multi-camera-array depth perception method of the embodiments of the present invention uses a laser speckle projector or another projection device to project a fixed pattern, encoding the space with structured light; multiple cameras on the same baseline then capture the projected pattern. Through per-camera depth calculation followed by depth-map fusion, high-resolution, high-precision image depth information (distance) is generated for target recognition or motion capture of three-dimensional images.
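The per-camera depth calculation is, at its core, a correspondence search between the captured speckle image and a stored reference pattern, followed by triangulation over the baseline. The following is a minimal sketch of that idea, not the patent's implementation: it assumes rectified grayscale images, a sum-of-absolute-differences (SAD) block cost, and a purely horizontal search; all function names and parameters are illustrative.

```python
import numpy as np

def block_match_disparity(img, ref, block=8, max_disp=64):
    """Brute-force SAD block matching along the (rectified) baseline direction.

    img, ref: grayscale arrays of identical shape (H, W); img is the camera
    view of the projected speckle, ref the stored reference pattern.
    Returns one disparity (in pixels) per non-overlapping block.
    """
    h, w = img.shape
    disp = np.zeros((h // block, w // block), dtype=np.float32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            patch = img[y:y + block, x:x + block].astype(np.int32)
            best_cost, best_d = np.inf, 0
            # Slide the candidate window left along the epipolar line.
            for d in range(min(max_disp, x) + 1):
                cand = ref[y:y + block, x - d:x - d + block].astype(np.int32)
                cost = np.abs(patch - cand).sum()  # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[by, bx] = best_d
    return disp

def disparity_to_depth(disp, focal_px, baseline_m):
    """Triangulation: depth = f * B / d (d in pixels, depth in meters)."""
    depth = np.zeros_like(disp)
    np.divide(focal_px * baseline_m, disp, out=depth, where=disp > 0)
    return depth
```

The double-target block matching mentioned in the abstract follows the same matching scheme, with a second input image taking the place of the reference pattern.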

[0025] Figure 1 schematically illustrates the overall flow of the multi-camera-array depth perception method according to an embodiment of the present invention. For clarity, the method is described below with reference to Figures 2 through 8.

[0026] Step 1. Ado...



Abstract

The invention provides a multi-camera-array depth perception method comprising the following steps: a laser speckle projector or another projection device projects a fixed pattern, performing structured-light coding of the space; multiple cameras on the same baseline capture the projected pattern; depth is calculated through two kinds of block-matching motion estimation, namely block matching between input images and reference images, and double-target block matching between every two input images; and depth-map fusion is then performed among the resulting depth maps according to detected distance ranges and projection shadows, eliminating the interference of shadows and noise, so that high-resolution, high-accuracy image depth information can be generated. The method is easy to implement in hardware, improves the accuracy of depth measurement, and extends its measurable range.
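To make the fusion step concrete, here is a rough sketch that merges aligned per-camera depth maps by masking out pixels that fall outside each camera's detected distance range or lie in projection shadow, then averaging the surviving estimates. The [near, far] range and the convention that shadow/noise pixels carry depth 0 are assumptions for illustration, not values taken from the patent.

```python
import numpy as np

def fuse_depth_maps(depth_maps, near=0.5, far=10.0):
    """Fuse aligned per-camera depth maps (a list of (H, W) arrays, meters).

    A pixel contributes from a given map only if its depth lies inside the
    assumed detectable range [near, far]; shadow/noise pixels are assumed
    to be stored as 0 and are therefore masked out. Pixels with no valid
    contribution stay 0 in the fused map.
    """
    stack = np.stack(depth_maps).astype(np.float64)  # (n_cams, H, W)
    valid = (stack >= near) & (stack <= far)         # range + shadow mask
    counts = valid.sum(axis=0).astype(np.float64)    # valid votes per pixel
    total = np.where(valid, stack, 0.0).sum(axis=0)
    fused = np.zeros_like(total)
    np.divide(total, counts, out=fused, where=counts > 0)
    return fused
```

A production version would presumably weight each estimate by its matching confidence rather than averaging uniformly; the masking by range and shadow is the part the abstract describes.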

Description

Technical Field

[0001] The invention belongs to the technical fields of image processing, human-computer interaction, and machine vision, and in particular relates to a depth perception method for a multi-camera array.

Background Technique

[0002] Vision is the most direct and principal way for human beings to observe and perceive the world. We live in a three-dimensional world: human vision can not only perceive the brightness, color, texture, and motion of an object's surface, but also judge its shape and its position in space (depth, distance). Enabling machine vision to obtain high-precision 3D depth information in real time, and thereby raising the intelligence level of machines, is a difficult point in current research on machine vision systems.

[0003] In the industrial field, high-resolution, high-precision 3D depth information has wide application requirements in fields such as automotive assisted safe driving, high-speed machine-tool processin...


Application Information

IPC(8): G06T15/10
Inventor: Ge Chenyang, Hua Gang, Zheng Nanning, Yao Huimin, Zhang Chen
Owner: XI AN JIAOTONG UNIV