
Bag-of-visual-word-model-based indoor scene cognitive method

A technology combining the visual bag-of-words model with indoor scene recognition, applied in the field of indoor scene cognition. It addresses problems such as a mobile robot's inability to complete high-intelligence tasks without scene-level knowledge, and achieves the effect of improving the scene recognition rate while keeping the algorithm fast.

Status: Inactive · Publication Date: 2017-03-22
HARBIN ENG UNIV
Cites: 7 · Cited by: 11

AI Technical Summary

Problems solved by technology

A mobile robot moving in an indoor scene does not know whether its current location is a living room, kitchen, or bedroom, so it cannot complete a high-intelligence task such as fetching a bottle of mineral water from the refrigerator in the kitchen for a human.




Detailed Description of the Embodiments

[0026] The present invention will be further described below in conjunction with the accompanying drawings.

[0027] The invention discloses an indoor scene cognition method based on the visual bag-of-words model. The method comprises two parts: offline map generation and online map query. The offline map generation part includes: scanning the scenes to obtain a scene training set; ORB feature detection and description; K-means clustering to extract centroids and construct the visual dictionary; and TF-IDF weighting to generate the training-set visual bag-of-words database. The online map query part includes: receiving a scene query instruction; acquiring an RGB image of the current scene and extracting its ORB features; querying the visual dictionary of the map database to generate a visual bag-of-words model of the current scene image; and using a KNN classifier to compare the bag-of-words models of the training set in the map database with that of the current scene ...
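The offline stage described in [0027] maps onto standard computer-vision tooling. Below is a minimal sketch, assuming OpenCV for ORB and scikit-learn for K-means; the dictionary size, feature count, and helper names are illustrative assumptions, not values from the patent. Note that ORB descriptors are 256-bit binary vectors, so running Euclidean K-means over their byte representation is a common simplification; the patent text shown here does not specify the clustering metric.

```python
# Sketch of the offline stage: ORB extraction -> K-means dictionary
# -> bag-of-words histograms -> TF-IDF weighting. All parameters are
# assumed for illustration.
import cv2
import numpy as np
from sklearn.cluster import KMeans

N_WORDS = 200  # visual dictionary size (assumed; not fixed in the text above)

def extract_orb(image_paths):
    """Return one ORB descriptor array per training image."""
    orb = cv2.ORB_create(nfeatures=1000)
    descriptors = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(img, None)
        if desc is not None:
            # Cast binary descriptors to float so KMeans can consume them.
            descriptors.append(desc.astype(np.float32))
    return descriptors

def build_dictionary(desc_per_image, n_words=N_WORDS):
    """Cluster all descriptors; the centroids are the visual words."""
    stacked = np.vstack(desc_per_image)
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(stacked)

def bow_histograms(desc_per_image, kmeans):
    """Quantize each image's descriptors into a word-count histogram."""
    hists = np.zeros((len(desc_per_image), kmeans.n_clusters))
    for i, desc in enumerate(desc_per_image):
        for word in kmeans.predict(desc):
            hists[i, word] += 1
    return hists

def tfidf(hists):
    """TF-IDF re-weighting of raw word counts, as in text retrieval."""
    tf = hists / np.maximum(hists.sum(axis=1, keepdims=True), 1)
    df = np.count_nonzero(hists > 0, axis=0)   # images containing each word
    idf = np.log(len(hists) / np.maximum(df, 1))
    return tf * idf, idf
```

The weighted histograms, together with their scene labels, would constitute the training-set bag-of-words database that the online query part searches.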



Abstract

The invention, which belongs to the field of mobile robot environment perception, relates in particular to a bag-of-visual-words-model-based indoor scene cognition method. The method comprises an offline part and an online part. In the offline part, the scene types are determined according to the application need; the robot uses an onboard RGB-D sensor to scan all scenes and obtain enough scene images to form an image training set; and a 256-dimensional binary ORB descriptor is generated for each feature in each training image by the ORB algorithm, each image usually yielding thousands of ORB vectors. In the online part, the robot receives a current-scene-type query instruction, and the system is initialized and prepared for the scene query. Because the ORB algorithm completes the image preprocessing steps of feature extraction and matching, the speed of the algorithm is guaranteed; and the scene recognition rate is improved by the KNN classifier, so the method satisfies the common indoor scene-query applications of a mobile robot.
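For the online part, the sketch below shows how a query image could be quantized against the stored dictionary and classified by a K-nearest-neighbour vote, continuing the offline sketch above. KNeighborsClassifier, k=5, and passing in the offline idf vector are assumptions for illustration, not details from the patent.

```python
# Sketch of the online stage: extract ORB features from the current
# scene image, build its TF-IDF-weighted bag-of-words histogram with
# the offline dictionary, and classify it with KNN.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def query_scene(image_path, kmeans, idf, train_hists, train_labels, k=5):
    orb = cv2.ORB_create(nfeatures=1000)
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    # Quantize descriptors to visual words using the offline dictionary.
    words = kmeans.predict(desc.astype(np.float32))
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    # Apply the same TF-IDF weighting used on the training histograms.
    hist = (hist / max(hist.sum(), 1)) * idf
    # Fit on the stored training histograms (could be done once offline).
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(train_hists, train_labels)
    return knn.predict(hist.reshape(1, -1))[0]   # e.g. "kitchen"
```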

Description

Technical Field

[0001] The invention belongs to the field of mobile robot environment perception, and in particular relates to an indoor scene recognition method based on the visual bag-of-words model.

Background Art

[0002] Grid maps can usually meet a robot's low-level requirements for navigation and obstacle-avoidance tasks. For high-level tasks such as human-robot interaction and task planning, however, the robot also needs semantic information about scene cognition, i.e. a cognition-oriented semantic map. A mobile robot moving in an indoor scene does not know whether its location is a living room, kitchen, or bedroom, so it cannot complete a highly intelligent task such as fetching a bottle of mineral water from the refrigerator in the kitchen for a human.

Summary of the Invention

[0003] The object of the present invention is to propose a method for recognizing indoor scenes based on the bag-of-visual-words model. ...


Application Information

IPC(8): G06K9/62
CPC: G06F18/23213; G06F18/24
Inventors: 赵玉新 (Zhao Yuxin), 李亚宾 (Li Yabin), 刘厂 (Liu Chang), 雷宇宁 (Lei Yuning)
Owner: HARBIN ENG UNIV