
Visual odometer realization method based on fusion of RGB and depth information

A visual odometry implementation method, applied in computing, image data processing, and instruments, which broadens the applicable time and space, removes the dependence on lighting conditions, and yields accurate and reliable motion estimation results

Inactive Publication Date: 2016-09-14
CHINA UNIV OF MINING & TECH
Cites: 6 · Cited by: 23

AI Technical Summary

Problems solved by technology

[0010] The present invention is proposed in view of the above-mentioned problems. Its purpose is to provide a visual odometry implementation method that fuses RGB and Depth information, integrating 2D and 3D image-matching methods to overcome the limitations of existing visual odometry.



Detailed Description of the Embodiments

[0023] The present invention is described in further detail below in conjunction with the accompanying drawings and embodiments:

[0024] As shown in Figure 1, a method for implementing a visual odometer fusing RGB and Depth information comprises the following steps:

[0025] 1) With the time T as the cycle, use the Kinect sensor to collect environmental information and output a sequence of RGB images and Depth images;

[0026] 2) In the order of the time axis, select an RGB image and its corresponding Depth image in turn, and convert the Depth image into a 3D point cloud in PCD format;
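The Depth-to-point-cloud conversion in step 2) can be sketched as a pinhole back-projection. The intrinsics below (fx, fy, cx, cy) are illustrative values typical of a Kinect-class sensor, not figures from the patent, and the PCD writer emits the ASCII v0.7 header defined by the Point Cloud Library:

```python
import numpy as np

def depth_to_pointcloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth image (in meters) into an Nx3 point cloud
    using the pinhole camera model; zero-depth (invalid) pixels are dropped.
    Intrinsics are assumed known from sensor calibration."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

def write_pcd(path, pts):
    """Save points as an ASCII .pcd v0.7 file (the format named in step 2)."""
    header = ("# .PCD v0.7 - Point Cloud Data file format\n"
              "VERSION 0.7\nFIELDS x y z\nSIZE 4 4 4\nTYPE F F F\n"
              "COUNT 1 1 1\nWIDTH {n}\nHEIGHT 1\n"
              "VIEWPOINT 0 0 0 1 0 0 0\nPOINTS {n}\nDATA ascii\n").format(n=len(pts))
    with open(path, "w") as f:
        f.write(header)
        for x, y, z in pts:
            f.write(f"{x} {y} {z}\n")
```

A pixel at the principal point back-projects onto the optical axis at the measured depth, which gives a quick sanity check on the intrinsics.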

[0027] 3) Perform brightness, color-cast, and blur detection on the selected RGB image to judge its quality β: compute the brightness parameter L, the color-cast parameter C, and the blur parameter F. If L=1, C=1, and F=1, the RGB image quality is good and β=1; otherwise the RGB image quality is poor and β=0;
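A minimal sketch of the three checks in step 3), assuming simple per-channel statistics and a Laplacian blur measure; the thresholds are illustrative assumptions, not the patent's values:

```python
import numpy as np

def image_quality(rgb, bright_rng=(60, 190), cast_thresh=15.0, blur_thresh=100.0):
    """Hypothetical implementation of step 3: returns (L, C, F, beta),
    each 1 = pass, 0 = fail. Thresholds are illustrative, not from the patent."""
    gray = rgb.mean(axis=2)
    # brightness: mean gray level must fall inside an acceptable range
    L = 1 if bright_rng[0] <= gray.mean() <= bright_rng[1] else 0
    # color cast: deviation of per-channel means from their common mean
    chan_means = rgb.reshape(-1, 3).mean(axis=0)
    C = 1 if np.abs(chan_means - chan_means.mean()).max() <= cast_thresh else 0
    # blur: variance of a 4-neighbor Laplacian response (low variance = blurry)
    lap = (-4 * gray[1:-1, 1:-1] + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    F = 1 if lap.var() >= blur_thresh else 0
    beta = 1 if (L == 1 and C == 1 and F == 1) else 0
    return L, C, F, beta
```

A sharp, well-exposed image passes all three checks (β=1), while a uniform gray frame fails only the blur test, matching the all-pass rule stated in step 3).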

[0028] 4) ...



Abstract

The invention discloses a visual odometer realization method based on the fusion of RGB and depth information. Existing visual odometer methods rely on a monocular or binocular camera, which obtains only the RGB information of the environment and cannot directly obtain its three-dimensional information, so their application environment is limited and their accuracy suffers. Positioning based on RGB-image matching is a mature technology with fast processing speed, while positioning based on Depth-image matching is highly robust to environmental change. Combining the advantages of both, the invention uses an RGB-D sensor to obtain the RGB and Depth information of a scene simultaneously and provides a visual odometer realization method based on their fusion. Information in both 2D and 3D modes is used reasonably; the visual system's dependence on lighting conditions is removed; the accuracy, robustness, and practicability of the odometer system are substantially improved; and the applicable time and space for the mobile robot are expanded.

Description

Technical Field

[0001] The invention belongs to the field of autonomous navigation and positioning of mobile robots, and in particular relates to a method for realizing a visual odometer that fuses RGB and Depth information.

Background Technique

[0002] The odometer plays a vital role in robot navigation and positioning. A visual odometer measures the distance and direction of a robot's movement from visual information, avoiding the cumulative error caused by drive-wheel idling or slipping and the measurement error caused by inertial-navigation drift. It relies on rich visual input, consumes little power, and requires no prior information about the scene or motion, making it an effective supplement to traditional methods.

[0003] At present, visual odometry mainly relies on the image sequence obtained by monocular or binocular cameras, and obtains the motion parameters o...

Claims


Application Information

IPC (8): G06T7/00, G06T7/20, G06T7/40
CPC: G06T2207/30241
Inventors: 缪燕子, 许红盛, 金慧杰, 金鑫, 卜淑萍, 李晓东, 周笛
Owner: CHINA UNIV OF MINING & TECH