
Indoor scene semantic annotating method based on super-pixel set

A technology for indoor scene semantic annotation, applied in the fields of instruments, character and pattern recognition, and computer components. It addresses the problem that superpixel or pixel features cannot fully describe objects.

Active Publication Date: 2018-04-20
BEIJING UNIV OF TECH


Problems solved by technology

[0005] The above methods all perform indoor scene semantic annotation based on superpixel or pixel features. However, the region occupied by a superpixel or pixel differs greatly from the region occupied by the object to be labeled, as shown in Figure 1: a superpixel covers only a small part of the image area where the sofa is located, so superpixel or pixel features cannot completely describe the characteristics of the object.
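The mismatch described above can be illustrated with a toy oversegmentation. The sketch below is purely illustrative (the image size matches the dataset, but the rectangular "sofa" region and the 20×20 grid superpixels are hypothetical stand-ins for a real oversegmentation such as SLIC): it measures how little of the object a single superpixel covers.

```python
import numpy as np

# Toy 480x640 "image" with a rectangular object occupying part of it,
# oversegmented into 20x20-pixel grid "superpixels" (a stand-in for a
# real oversegmentation; region and cell size are hypothetical).
h, w, cell = 480, 640, 20
object_mask = np.zeros((h, w), dtype=bool)
object_mask[200:360, 100:420] = True  # hypothetical sofa region

# Label each pixel with its grid-cell ("superpixel") index.
labels = (np.arange(h)[:, None] // cell) * (w // cell) \
    + np.arange(w)[None, :] // cell

# Superpixels that overlap the object at all:
overlapping = np.unique(labels[object_mask])

# Fraction of the object covered by a single one of those superpixels:
one_sp = labels == overlapping[0]
frac = (one_sp & object_mask).sum() / object_mask.sum()
print(len(overlapping), round(frac, 4))  # 128 superpixels, ~0.8% each
```

Even in this idealized grid case, the object is split across 128 superpixels and any single superpixel covers under 1% of it, which is why the method aggregates features over a superpixel *set* instead.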



Examples


Embodiment Construction

[0088] The present invention uses the NYU V1 dataset, collected and compiled by Silberman and Fergus et al., for its experiments. This dataset has 13 semantic categories (Bed, Blind, Bookshelf, Cabinet, Ceiling, Floor, Picture, Sofa, Table, TV, Wall, Window, Background) and 7 scenes. The entire dataset contains 2284 color (RGB) images and 2284 depth images in one-to-one correspondence, each a standard 480×640 image. Following the conventional division, the present invention selects 60% of the dataset for training and 40% for testing.
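The 60/40 division above can be sketched as follows. The patent does not specify how the frames are selected, so a sequential index split is assumed here purely for illustration:

```python
# Hypothetical split of the 2284 NYU V1 frames: first 60% for training,
# remaining 40% for testing (the exact selection scheme is not given in
# the source; a sequential split is an assumption).
n_frames = 2284
n_train = int(n_frames * 0.6)
train_ids = list(range(n_train))
test_ids = list(range(n_train, n_frames))
print(len(train_ids), len(test_ids))
```

Each RGB frame and its corresponding depth frame share the same index, so one index list selects both modalities consistently.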

[0089] Based on the NYU V1 dataset, a comparative experiment was carried out between the method proposed by the present invention and the methods proposed by Silberman, Ren, Salman H. Khan, Anran, Heng, etc. The experimental results, reported as class average accuracy, are shown in Figure 2. It can be seen that the method proposed by the present invention has achieve...



Abstract

An indoor scene semantic annotating method based on the super-pixel set belongs to the technical field of multimedia technology and computer graphics, and overcomes the limitation of the small-size space for semantic feature extraction in indoor scene semantic annotating methods based on super-pixel characteristics or pixel characteristics. The method comprises the steps of firstly calculating the super-pixel characteristics, modeling super-pixel set characteristics based on the super-pixel characteristics by utilizing a Gaussian mixture model, mapping the super-pixel set characteristics to a Hilbert space, and finally reducing the dimension to a Euclidean space to obtain the characteristic representation of the super-pixel set. Compared with the prior art, the method aims at feature extraction of the space (super-pixel set) which is basically equal to an object, so that the object can be more accurately represented, achieving the goal of improving the semantic annotation accuracy of the indoor scene.
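The pipeline in the abstract (per-set GMM modeling, Hilbert-space mapping, Euclidean dimensionality reduction) can be approximated with off-the-shelf tools. The sketch below is a minimal stand-in, not the patent's actual formulation: it summarizes each superpixel set by flattened GMM parameters, uses an RBF kernel as the Hilbert-space mapping, and kernel PCA as the reduction to a Euclidean space; the feature dimensions, component count, and kernel choice are all assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)

# Synthetic stand-in: 6 "superpixel sets", each with 40 superpixels
# described by 5-dimensional features (dimensions are hypothetical).
sets = [rng.normal(loc=i % 3, scale=1.0, size=(40, 5)) for i in range(6)]

def gmm_descriptor(feats, k=2):
    """Fit a GMM to one superpixel set and flatten its parameters into
    a fixed-length vector, sorting components by their first mean
    coordinate so the descriptor is invariant to component ordering."""
    gmm = GaussianMixture(n_components=k, covariance_type="diag",
                          random_state=0).fit(feats)
    order = np.argsort(gmm.means_[:, 0])
    return np.concatenate([gmm.means_[order].ravel(),
                           gmm.covariances_[order].ravel(),
                           gmm.weights_[order]])

descriptors = np.stack([gmm_descriptor(s) for s in sets])

# RBF kernel approximates the Hilbert-space embedding; kernel PCA
# reduces back to a low-dimensional Euclidean representation.
embedding = KernelPCA(n_components=2, kernel="rbf").fit_transform(descriptors)
print(embedding.shape)
```

In the actual method the resulting Euclidean representation would feed a classifier that assigns a semantic label to each superpixel set; that stage is omitted here.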

Description

Technical Field

[0001] The invention belongs to the technical fields of multimedia technology and computer graphics, and in particular relates to an indoor scene semantic labeling method.

Background Technique

[0002] Semantic annotation of indoor scenes, as a necessary task in computer vision research, has always been a hot topic in related fields. Due to the large number of semantic categories in indoor scenes, mutual occlusion between objects, weak discriminability of low-level visual features, and uneven lighting, indoor scene semantic annotation has become a thorny and challenging research direction in image understanding. Indoor scene semantic annotation is the core issue of indoor scene understanding. Its basic goal is to densely assign a predefined semantic category label to each pixel in a given indoor scene image or a frame of a video shot in an indoor scene. It has great application value in many fields such as indoor intelligent service robots and anti-terrorism EO...


Application Information

IPC(8): G06K9/00; G06K9/34
CPC: G06V20/36; G06V10/26
Inventors: 王立春, 段学浩, 孔德慧, 王玉萍, 尹宝才
Owner: BEIJING UNIV OF TECH