Depth sensor-based method of establishing indoor 3D (three-dimensional) semantic map

A depth sensor and semantic map technology, applied in the field of robot vision scene understanding, which solves the problem of maps lacking semantic understanding

Active Publication Date: 2015-06-24
UNIV OF SCI & TECH OF CHINA
Cites: 3 · Cited by: 29

AI Technical Summary

Problems solved by technology

However, these technologies use depth sensing only to build indoor 3D structural maps; the maps lack semantic understanding, such as where the wall is or where the table is.

Method used

Figure 1 is a flowchart of the depth sensor-based indoor 3D semantic map construction method provided by an embodiment of the invention.

Embodiment Construction

[0032] The technical solutions in the embodiments of the present invention will be clearly and completely described below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on these embodiments without creative effort shall fall within the protection scope of the present invention.

[0033] Figure 1 is a flowchart of the depth sensor-based indoor 3D semantic map construction method provided by an embodiment of the present invention. As shown in Figure 1, the method mainly includes the following steps:

[0034] Step 11: use the depth sensor to collect RGB-D images of the indoor environment and construct an indoor 3D map.

[0035] In the embodiment of the present invention, the indoor environment can be scanned by ...
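The listing truncates the details of step 11 at this point. Purely as an illustration of what this step involves, the sketch below back-projects one registered RGB-D frame into a colored point cloud, the basic building block of an indoor 3D map, using a pinhole camera model; the intrinsics (fx, fy, cx, cy) and the millimeter depth scale are assumed example values, not parameters disclosed in the patent.

```python
import numpy as np

def depth_to_point_cloud(depth, rgb, fx, fy, cx, cy, depth_scale=1000.0):
    """Back-project one registered RGB-D frame into a colored 3D point cloud.

    depth : (H, W) raw depth image (e.g. uint16 millimeters)
    rgb   : (H, W, 3) color image registered to the depth image
    fx, fy, cx, cy : pinhole intrinsics of the depth sensor (assumed known)
    """
    h, w = depth.shape
    z = depth.astype(np.float32) / depth_scale        # raw units -> meters
    u, v = np.meshgrid(np.arange(w), np.arange(h))    # per-pixel coordinates
    x = (u - cx) * z / fx                             # pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0                          # drop pixels with no depth
    return points[valid], colors[valid]

# Assumed Kinect-style intrinsics, for illustration only:
# pts, cols = depth_to_point_cloud(depth_img, rgb_img,
#                                  fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

Successive frames would then be registered against each other and merged to form the full indoor 3D map; the registration method itself is not recoverable from this listing.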

Abstract

The invention discloses a depth sensor-based method of establishing an indoor 3D (three-dimensional) semantic map. The method includes: using a depth sensor to acquire color-depth (RGB-D) images of an indoor environment and establishing an indoor 3D map from them; segmenting the acquired RGB-D images and calculating color and shape features of the resulting segments to acquire corresponding semantic information; and fusing the acquired semantic information with the indoor 3D map to obtain the indoor 3D semantic map. The method has the advantage of being suitable for establishing semantic information, such as structural semantic information and furniture semantic information, so as to facilitate a robot's execution of high-level intelligent operations.
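The abstract compresses the method into three stages: segment the RGB-D image, compute color and shape features per segment, and fuse the resulting labels into the 3D map. The Python sketch below shows one plausible shape for that pipeline; the listing does not disclose the actual segmentation algorithm, feature set, or classifier, so `segment_rgbd` and `classify` are hypothetical placeholders, and the histogram and depth statistics are assumed example features.

```python
import numpy as np

def segment_features(rgb, depth, mask):
    """Color + shape features for one segment (illustrative choices only)."""
    pixels = rgb[mask]                                   # segment's RGB pixels
    # Color cue: 8-bin histogram per channel, L1-normalized.
    hist = np.concatenate(
        [np.histogram(pixels[:, c], bins=8, range=(0, 256))[0] for c in range(3)]
    ).astype(np.float32)
    hist /= max(hist.sum(), 1.0)
    # Shape cue: segment area ratio plus depth statistics as a crude 3D extent.
    z = depth[mask].astype(np.float32)
    shape = np.array([mask.mean(), z.mean(), z.std()], dtype=np.float32)
    return np.concatenate([hist, shape])

def build_semantic_map(frames, segment_rgbd, classify):
    """Label every segment of every frame, then collect labels for fusion.

    segment_rgbd(rgb, depth) -> list of boolean masks   [hypothetical helper]
    classify(features)       -> semantic label string   [hypothetical helper]
    """
    labeled_segments = []
    for rgb, depth in frames:
        for mask in segment_rgbd(rgb, depth):
            label = classify(segment_features(rgb, depth, mask))
            labeled_segments.append((mask, label))       # input to map fusion
    return labeled_segments
```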

Description

technical field

[0001] The invention relates to the technical field of robot visual scene understanding, and in particular to a depth sensor-based indoor 3D semantic map construction method.

Background technique

[0002] Semantic perception is a core and crucial technology of indoor service robots. Traditional robots that build indoor maps with lasers face two limitations. On the one hand, a laser-built map is 2D; lacking effective 3D information, the robot can only avoid objects on the ground when moving and cannot avoid objects at a certain height. On the other hand, with a laser-built map the robot can only perform low-level operations such as obstacle avoidance, movement, and path planning; it does not truly understand its surroundings. For domestic service robots, truly understanding the environment and the needs of users is crucial, which is also the goal of artificial intelligence and one of...

Claims


Application Information

IPC(8): G06T17/00
Inventors: Zhao Zhe (赵哲), Chen Xiaoping (陈小平)
Owner: UNIV OF SCI & TECH OF CHINA