
Unsupervised learning scene feature rapid extraction method fusing semantic information

An unsupervised learning and semantic information technology, applied in the field of rapid extraction of unsupervised learning scene features, which can solve problems such as binarized feature descriptors having insufficient discrimination for complex scenes and unstable image information interfering with the scene matching effect.

Active Publication Date: 2020-06-05
北京格镭信息科技有限公司
Cites: 6 | Cited by: 11
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

In order to solve the problem that unstable information in the image seriously interferes with the scene matching effect, and the problem that binarized feature descriptors have insufficient discrimination for complex scenes, the present invention provides a fast extraction method for unsupervised learning scene features that fuses semantic information.

Method used



Examples


Detailed Description of the Embodiments

[0034] The present invention will be described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0035] In order to achieve high-precision, highly robust extraction of global and local image features while improving the efficiency of scene matching, the present invention considers the guiding role of semantic features in the extraction of salient regions of the scene and the advantage of the high computational efficiency of binarized feature descriptors, and discloses a fast extraction method of unsupervised learning scene features fused with semantic information. The process, as shown in Figure 1, proceeds according to the following steps:
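No source code accompanies the patent text, so the sketch below only illustrates, under stated assumptions, how the overall flow of Figure 1 could be organized. The callables segment, saliency_score and binarize, and the set of "stable" semantic classes, are hypothetical placeholders, not interfaces defined by the invention.

```python
import numpy as np

def extract_scene_features(frame, segment, saliency_score, binarize):
    """Hypothetical outline of the pipeline: semantic segmentation guides
    salient-region detection, then a binarized descriptor is computed.
    All three callables are placeholders supplied by the caller."""
    label_map = segment(frame)                        # per-pixel semantic labels (H x W)
    stable_classes = [0, 1, 2]                        # assumed IDs of semantically stable classes
    stable_mask = np.isin(label_map, stable_classes)  # keep only semantically stable pixels
    saliency = saliency_score(frame) * stable_mask    # restrict saliency to stable areas
    return binarize(frame, saliency)                  # binary descriptor over salient regions
```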

[0036] Step 1: Scene salient region extraction

[0037] Firstly, the video frame is preprocessed to remove blurred and distorted areas. The frame is then sampled row by row with a sliding window to compute a saliency score S(p(x, y, f_t)) for each pixel p(x, y) of frame f_t.
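Paragraph [0038], which defines the saliency score, is not reproduced in this listing, so the snippet below substitutes a simple local-contrast score as a stand-in: each pixel is compared with the mean of a sliding window centred on it. This is only a sketch of the sliding-window idea, not the patented formula.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def saliency_map(frame_gray, win=9):
    """Stand-in for S(p(x, y, f_t)): absolute contrast of each pixel
    against the mean of a win x win sliding window around it."""
    f = frame_gray.astype(np.float32)
    local_mean = uniform_filter(f, size=win)   # sliding-window mean per pixel
    score = np.abs(f - local_mean)
    return score / (score.max() + 1e-8)        # normalized to [0, 1]
```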

[0038]

[0039] As shown in Figure 2...


Abstract

The invention discloses an unsupervised learning scene feature rapid extraction method fusing semantic information, which belongs to the technical field of image processing. The method mainly addresses the technical problem of image feature description in scene recognition. In view of the problem that unstable information in an image seriously interferes with the scene matching effect, and the problem that binarized feature descriptors are not robust to severe environmental changes, accurate scene semantic features are extracted through a semantic segmentation model obtained by a weighted model fusion strategy, and these features guide the detection of key regions containing specific information. On the basis of these regions, a screening strategy based on pixel position clues and an unsupervised learning algorithm are respectively adopted to extract binarized feature descriptors with high discrimination capability, so that scene matching precision can be improved while computational complexity is reduced.
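The listing does not detail the screening strategy or the unsupervised learning algorithm used to obtain the binarized descriptors. Purely as a generic illustration, the sketch below learns one threshold per feature dimension from unlabeled descriptors (median thresholding, a common unsupervised binarization baseline); it is not the patented method.

```python
import numpy as np

def learn_thresholds(descriptors):
    """Unsupervised step: one median threshold per feature dimension,
    learned from unlabeled real-valued descriptors (N x D)."""
    return np.median(descriptors, axis=0)

def binarize(descriptors, thresholds):
    """Bit is 1 where the feature exceeds its learned threshold."""
    return (descriptors > thresholds).astype(np.uint8)

# Usage with random stand-in data:
train = np.random.rand(1000, 64)                  # unlabeled training descriptors
codes = binarize(train, learn_thresholds(train))  # 1000 x 64 binary codes
```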

Description

Technical Field

[0001] The invention relates to the technical field of image processing, and relates to a fast extraction method of unsupervised learning scene features fused with semantic information.

Background Technique

[0002] Scene feature extraction is often used to extract specific information in the scene so as to retrieve scenes with consistent content from the scene database. It has a wide range of applications in image retrieval, visual positioning, closed-loop detection and other fields.

[0003] In the face of complex and changeable scenes, how to quickly extract stable features from them is a key technology in visual positioning tasks. Manually extracted features are widely used in visual localization systems, and can be divided into two categories according to the size of the feature description area: local features and global features. Methods based on local features, such as SIFT, SURF, ORB, describe the image by extracting feature points. This method only...
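For reference, the hand-crafted local features mentioned above (SIFT, SURF, ORB) detect keypoints and compute a descriptor per point; the short OpenCV example below shows ORB, whose descriptors are already binary. The image path is purely illustrative.

```python
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)       # illustrative path
orb = cv2.ORB_create(nfeatures=500)                        # detect up to 500 keypoints
keypoints, descriptors = orb.detectAndCompute(img, None)
# descriptors: one 32-byte (256-bit) binary descriptor per detected keypoint
print(len(keypoints), None if descriptors is None else descriptors.shape)
```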

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00; G06K9/62
CPC: G06V20/49; G06V20/41; G06V20/46; G06F18/23213
Inventors: 贾克斌, 王婷娴, 孙中华
Owner: 北京格镭信息科技有限公司