
Mobile robot image visual positioning method in dynamic environment

A mobile robot image visual positioning technology, applied in the field of visual positioning and navigation, which addresses the poor robustness of existing algorithms in dynamic environments and achieves the effects of correcting positioning errors, reducing complexity, and estimating accurate motion trajectories.

Active Publication Date: 2018-08-28
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

[0004] In order to solve the problems in the background art, the present invention proposes a mobile robot image visual positioning method in a dynamic environment, addressing the poor robustness of existing algorithms in dynamic environments.



Examples


Embodiment Construction

[0036] The technical solution of the present invention will be described in further detail below in conjunction with the accompanying drawings.

[0037] In step 1, a Kinect depth camera is fixedly mounted on the front of the wheeled mobile robot, and each frame of the robot's environment is captured by the camera; each frame includes an RGB color image and a depth image.

[0038] In step 2, according to the pinhole camera model, each depth image of the two adjacent frames is processed to obtain a three-dimensional point cloud containing the environmental information of that frame.
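The back-projection in step 2 can be sketched as follows. This is a minimal illustration of the pinhole camera model, not the patent's exact implementation; the function name, the NumPy-based layout, and the sample intrinsics (fx, fy, cx, cy) are assumptions for demonstration.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=1000.0):
    """Back-project a depth image into a 3-D point cloud using the pinhole
    camera model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    depth: (H, W) array of raw depth values (millimetres for a Kinect)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates [u v]
    z = depth.astype(np.float64) / depth_scale      # convert to metres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                 # drop invalid zero-depth pixels

# Toy 2x2 depth image with one invalid pixel, hypothetical intrinsics.
pts = depth_to_point_cloud(np.array([[1000, 0], [2000, 1000]]), 500.0, 500.0, 1.0, 1.0)
```

Running this on each of the two adjacent depth frames yields the per-frame point clouds that the later clustering and residual steps operate on.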

[0039] As shown in Figure 2, a spatial point imaged by the camera has pixel coordinates [u v] in the image coordinate system and coordinates [x y z] in the world coordinate system, where the world coordinate system is the reference frame coordinate system or the curre...



Abstract

The invention discloses a mobile robot image visual positioning method in a dynamic environment. A Kinect depth camera is fixed to a mobile robot; for two adjacent frames of images, the three-dimensional point cloud of the depth image and the matching point pairs of the RGB color image are obtained. An initial pose transformation matrix between the two adjacent frames is obtained through the iterative closest point algorithm. The three-dimensional point clouds are clustered and transformed, and the residual between the three-dimensional point cloud clusters of the two adjacent frames is calculated. The matching point pairs and the three-dimensional point cloud belonging to the static background are processed iteratively to obtain a more accurate pose transformation matrix; the pose transformation matrix between every two adjacent frames is calculated, and image visual positioning of the mobile robot is realized. According to the invention, the hardware cost is reduced, the complexity of recovering pixel depth values is overcome, the positioning error of the robot is corrected, and the indoor motion trajectory of the mobile robot is estimated more accurately.
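The pose-transformation step summarized above can be illustrated with the SVD-based (Kabsch) rigid alignment that is commonly run inside each iterative closest point iteration. This is a hedged sketch of that standard sub-step, not the patent's full pipeline; the function name and the noiseless synthetic data are assumptions for demonstration.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) such that dst_i ≈ R @ src_i + t,
    computed with the SVD (Kabsch) method from matched point pairs."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Sanity check: recover a known rotation about z and a known translation.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.3])
src = np.random.default_rng(0).normal(size=(50, 3))
dst = src @ R_true.T + t_true
R, t = estimate_rigid_transform(src, dst)
```

In the method's refinement loop, dynamic point-cloud clusters with large residuals would be discarded and the transform re-estimated from the static-background points only.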

Description

Technical field

[0001] The invention belongs to the field of visual positioning and navigation, and in particular relates to a mobile robot image visual positioning method in a dynamic environment.

Background art

[0002] With the rapid development of robotics, mobile robots are widely used in industry. Among traditional positioning methods, absolute positioning mainly relies on navigation beacons, signs, and satellite navigation, but the construction and maintenance costs of beacons and signs are relatively high, and GPS can only be used outdoors. Relative positioning mainly relies on wheel encoders, inertial measurement units, and laser radars; wheel encoders and inertial measurement units measure only indirect physical quantities, and high-precision laser radars are too expensive. These factors have brought many challenges and difficulties to the commercialization of autonomously positioning mobile robots and their entry into daily life.

[0003] In recent years, the visua...

Claims


Application Information

IPC(8): G06T7/136, G06T7/194, G06T7/33, G06T7/38, G06T7/80, G01C21/00
CPC: G01C21/00, G06T7/136, G06T7/194, G06T7/337, G06T7/38, G06T7/80
Inventors: 何再兴, 杨勤峰, 赵昕玥, 张树有, 谭建荣
Owner ZHEJIANG UNIV