
Feature extraction and matching method and feature extraction and matching system in visual navigation

A feature extraction and matching technology, applied in image analysis, image enhancement, instruments, etc. It addresses the problems of heavy noise and low matching accuracy, and achieves the effects of faster matching, tolerance to affine transformation, and a reduced amount of data processing.

Active Publication Date: 2014-11-05
BEIJING GUODIAN FUTONG SCI & TECH DEV

AI Technical Summary

Problems solved by technology

[0009] For this reason, the technical problem to be solved by the present invention is that, for images whose textures are similar to their backgrounds, existing feature extraction and matching methods leave heavy noise after segmentation and achieve low matching accuracy; the invention accordingly provides a feature extraction and matching method.



Examples


Embodiment 1

[0051] A feature extraction and matching method in visual navigation is provided in this embodiment, including the following steps:

[0052] (1) Extract the feature regions using the two-dimensional maximum entropy threshold segmentation method, a conventional feature-region extraction technique, here used to extract the small feature regions that stand out from the subject: compute the entropy value at each point of the image and choose the segmentation threshold so that the entropy of the image reaches its maximum. To improve the noise resistance of the two-dimensional maximum entropy threshold segmentation method, the pixels are denoised in this process: take each pixel together with its adjacent pixels as a neighborhood, compute the mean pixel value over the neighborhood to form a pixel-mean pair, and establish a two-dimensional fun...
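The two-dimensional maximum entropy step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the bin count, the 3×3 neighborhood, and the function names (`neighborhood_mean`, `max_entropy_2d_threshold`) are my own choices.

```python
import numpy as np

def neighborhood_mean(img, k=3):
    # k x k box mean via edge padding and a summed-area table.
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/column for clean box sums
    h, w = img.shape
    box = c[k:k + h, k:k + w] - c[:h, k:k + w] - c[k:k + h, :w] + c[:h, :w]
    return box / (k * k)

def max_entropy_2d_threshold(img, bins=16):
    # Build the (gray level, neighborhood mean) 2-D histogram and pick the
    # threshold pair (s, t) maximizing H(object) + H(background).
    gbin = (img.astype(float) / 256 * bins).astype(int).clip(0, bins - 1)
    mbin = (neighborhood_mean(img) / 256 * bins).astype(int).clip(0, bins - 1)
    hist = np.zeros((bins, bins))
    np.add.at(hist, (gbin.ravel(), mbin.ravel()), 1)
    p = hist / hist.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    P = np.cumsum(np.cumsum(p, axis=0), axis=1)       # corner probability mass
    H = np.cumsum(np.cumsum(plogp, axis=0), axis=1)   # corner sum of p*log p
    P_tot, H_tot = P[-1, -1], H[-1, -1]
    best, best_st = -np.inf, (0, 0)
    for s in range(bins - 1):
        for t in range(bins - 1):
            Po, Pb = P[s, t], P_tot - P[s, t]
            if Po < 1e-12 or Pb < 1e-12:
                continue
            Ho = np.log(Po) - H[s, t] / Po            # entropy of the corner class
            Hb = np.log(Pb) - (H_tot - H[s, t]) / Pb  # entropy of the rest
            if Ho + Hb > best:
                best, best_st = Ho + Hb, (s, t)
    return best_st
```

Thresholding on the (gray level, neighborhood mean) pair rather than on the gray level alone is what gives the method its resistance to isolated noise pixels.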

Embodiment 2

[0064] This embodiment provides a feature extraction and matching method in visual navigation. On the basis of the above embodiment, after the feature regions are extracted with the two-dimensional maximum entropy threshold segmentation method, the method further includes a step of filtering out feature regions whose pixel count is smaller than a preset threshold. Because the image texture is fine, the segmentation yields some feature regions consisting of only a few pixels; such regions are not distinctive and easily cause matching errors. Larger regions are therefore selected for the next processing step. Removing the few-pixel feature regions reduces matching errors, improves matching speed, and reduces the amount of data to process.
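The small-region filtering step can be sketched as a plain connected-component pass. This is an illustrative sketch, assuming a boolean segmentation mask and 4-connectivity; the function name and the BFS labeling are my own (in practice a library routine such as `scipy.ndimage.label` would do the labeling).

```python
import numpy as np
from collections import deque

def filter_small_regions(mask, min_pixels):
    # Keep only 4-connected foreground components with >= min_pixels pixels.
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    out = np.zeros_like(mask, dtype=bool)
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                comp, q = [], deque([(y, x)])
                seen[y, x] = True
                while q:  # breadth-first flood fill of one component
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_pixels:
                    for cy, cx in comp:
                        out[cy, cx] = True
    return out
```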

[0065] In this embodiment, a feature extraction and matching method in visual navigation is provided, and the specific design of the main li...

Embodiment 3

[0083] This embodiment also provides a feature extraction and matching system in visual navigation using the above method, including the following parts:

[0084] Feature region extraction unit: extracts the feature regions using the two-dimensional maximum entropy threshold segmentation method;

[0085] Normalization processing unit: performs image normalization on the extracted feature regions;

[0086] Matching unit: uses the SIFT algorithm to obtain the feature vector of each feature point, matches each feature point in every feature region of the first image against the feature points in every feature region of the second image, and counts the matching points; the two feature regions with the largest number of matching points are selected as the matching regions, and the matched feature points serve as the matching feature points.

[0087] Wherein, the feature region extracting unit further includes a filtering subunit to filter out feature regions containing pixels s...
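The matching unit's region-pair selection can be sketched as below. The sketch assumes descriptors have already been computed for each feature region (in practice SIFT descriptors, e.g. via OpenCV's `SIFT_create`); the Lowe ratio test and all function names here are my own additions, not taken from the patent.

```python
import numpy as np

def count_matches(desc_a, desc_b, ratio=0.75):
    # Count one-way matches from desc_a (n, d) to desc_b (m, d) that pass
    # the nearest/second-nearest distance ratio test.
    if len(desc_a) == 0 or len(desc_b) < 2:
        return 0
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nearest = np.sort(d, axis=1)[:, :2]  # two smallest distances per query
    return int(np.sum(nearest[:, 0] < ratio * nearest[:, 1]))

def best_region_pair(regions_1, regions_2):
    # regions_*: list of per-region descriptor arrays for each image.
    # Returns (i, j, n_matches) for the region pair with the most matches.
    best = (-1, -1, -1)
    for i, da in enumerate(regions_1):
        for j, db in enumerate(regions_2):
            n = count_matches(da, db)
            if n > best[2]:
                best = (i, j, n)
    return best
```

Selecting the pair with the maximum match count is the screening condition the abstract describes; the matched points of that pair are then the matching feature points.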



Abstract

The invention provides a feature extraction and matching method in visual navigation. In the method, feature regions are extracted with a two-dimensional maximum entropy threshold segmentation method; the circumscribed square of each feature region is normalized; each normalized circumscribed square serves as a feature region to be matched; the feature vector of each feature point is obtained; each feature point in every feature region of a first image is matched against the feature points in every feature region of a second image, and the number of matching points is obtained; and the two feature regions with the most matching points are selected as the matching regions, with the matched feature points serving as the matching feature points. In this scheme, the number of matching points obtained through feature point matching is used as the screening condition to find the feature regions with the most matches, completing the matching of feature regions and yielding the actual matching pairs; the number of matching points for navigation images with fine, dense, and uniform textures is thereby increased even under affine transformations such as shear.

Description

technical field

[0001] The invention relates to an image analysis and processing method, in particular to a feature extraction and matching method in visual navigation.

Background technique

[0002] Navigation means that a moving body makes a global path plan from known map information according to predetermined task commands, continuously perceives the surrounding local environment, makes decisions, and adjusts its position and attitude at any time, guiding itself safely to the target position. Inertial navigation is the most common method, but because of the inherent precision limits of the device itself and the influence of the external environment, errors arise and accumulate during navigation and affect the final result; in particular, in pipelines that are slippery or contain material inside, slipping very easily causes position estimation errors, so that t...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/00, G06T5/40
Inventor 刘振强张玉李新生范力遥董启强钟根元丁飞王峰张培林苑晓微蒋丽杨志杜岩
Owner BEIJING GUODIAN FUTONG SCI & TECH DEV