
Image retrieval method based on feature fusion

A technology combining image retrieval and feature fusion, applied to still-image data retrieval, image coding, image data processing, etc. It addresses the problem that image representations lack local information, and achieves strong discriminative power while reducing storage cost.

Pending Publication Date: 2021-01-01
HUAZHONG UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

Therefore, when only fully-connected-layer features are used, the resulting image representation lacks local information.



Examples


Embodiment 1

[0058] An image retrieval method based on feature fusion, as shown in Figure 1, includes:

[0059] Model training step: establish a convolutional neural network for extracting image features, and train it on a training image set to obtain the feature extraction network;

[0060] Multi-layer semantic floating-point descriptor construction step: extract at least one high-level semantic feature and at least one low-level image feature of the image, and fuse the extracted features to obtain the multi-layer semantic floating-point descriptor of the image. As shown in Figures 1 and 2, in this embodiment the high-level semantic features include a global descriptor (Global), an object descriptor (Object), and a salient-region descriptor (Salient), and the low-level image features include a SIFT descriptor. The global descriptor describes the global features of the image, and the object descriptor describes the object inf...
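The fusion of the four descriptor types above can be sketched as normalized concatenation. This is a minimal illustration, not the patent's exact rule: the excerpt does not specify the fusion weights, so plain L2-normalization of each part before concatenation is an assumption.

```python
import numpy as np

def l2norm(v, eps=1e-12):
    # L2-normalize so each descriptor contributes comparably to the fused vector.
    return v / (np.linalg.norm(v) + eps)

def fuse_descriptors(global_d, object_d, salient_d, sift_vlad):
    """Fuse high-level semantic descriptors (global, object, salient-region)
    with a low-level SIFT-based descriptor into one floating-point vector.
    Illustrative sketch: normalized concatenation; the patent's exact
    fusion scheme is not given in this excerpt."""
    parts = [l2norm(np.asarray(d, dtype=np.float64))
             for d in (global_d, object_d, salient_d, sift_vlad)]
    return l2norm(np.concatenate(parts))

# Toy example with hypothetical descriptor dimensions.
rng = np.random.default_rng(0)
desc = fuse_descriptors(rng.normal(size=512), rng.normal(size=256),
                        rng.normal(size=256), rng.normal(size=1024))
print(desc.shape)  # (2048,)
```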

Embodiment 2

[0128] An image retrieval method based on feature fusion. This embodiment is similar to Embodiment 1 above; the difference is that in this embodiment the high-level semantic features include the global descriptor and the object descriptor, and the low-level image features include the SIFT descriptor.

[0129] For the specific implementation of this embodiment, reference may be made to the description in Embodiment 1 above, which will not be repeated here.

Embodiment 3

[0131] An image retrieval method based on feature fusion. This embodiment is similar to Embodiment 1 above; the difference is that in this embodiment the high-level semantic features include the global descriptor, and the low-level image features include the SIFT descriptor.

[0132] For the specific implementation of this embodiment, reference may be made to the description in Embodiment 1 above, which will not be repeated here.



Abstract

The invention discloses an image retrieval method based on feature fusion, belonging to the field of image retrieval, comprising the following steps: training a feature extraction network; extracting a multi-layer semantic floating-point descriptor of each image in a training image set and performing hash learning to generate a rotation matrix R; extracting a multi-layer semantic floating-point descriptor of each image in an image library, rotating it by R, and binarizing the result; classifying the images in the image library with a classification network; and storing the binary descriptor and class-probability vector of each image. The multi-layer semantic floating-point descriptor is obtained by extracting high-level semantic features and low-level image features of each image and fusing them. The high-level semantic features include global descriptors, extracted by scaling each image to several different scales, extracting features with the feature extraction network, and fusing the results; the low-level image features include SIFT descriptors, extracted by computing multiple SIFT features per image and aggregating them into a VLAD vector. The method constructs descriptors with strong discriminative power and a small storage footprint.
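The "hash learning to generate a rotation matrix R, then rotate and binarize" step in the abstract can be sketched with an ITQ-style (Iterative Quantization) procedure. This is an assumed instantiation: the abstract only says a rotation matrix is learned, not which objective the patent uses.

```python
import numpy as np

def learn_rotation_itq(X, n_iter=50, seed=0):
    """ITQ-style learning of an orthogonal rotation R minimizing the
    quantization error between rotated descriptors X @ R and their signs.
    Sketch under the assumption the patent uses an ITQ-like objective."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    R, _ = np.linalg.qr(rng.normal(size=(d, d)))  # random orthogonal init
    for _ in range(n_iter):
        B = np.sign(X @ R)                  # fix R, update binary codes
        U, _, Vt = np.linalg.svd(B.T @ X)   # fix B, solve orthogonal Procrustes
        R = (U @ Vt).T
    return R

def binarize(X, R):
    # Rotate the floating-point descriptors by R, then threshold at zero
    # to obtain compact binary codes for storage and fast matching.
    return (X @ R) >= 0

X = np.random.default_rng(1).normal(size=(100, 16))  # toy descriptors
R = learn_rotation_itq(X)
codes = binarize(X, R)
print(codes.shape)  # (100, 16)
```

At query time the same rotation and thresholding are applied to the query descriptor, and candidates are ranked by Hamming distance on the binary codes, which is what makes the stored representation compact.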

Description

Technical field

[0001] The invention belongs to the field of image retrieval, and more specifically relates to an image retrieval method based on feature fusion.

Background technique

[0002] Content-based image retrieval methods extract the visual features of an image to describe it, which is more accurate and comprehensive than text. The Scale-Invariant Feature Transform (SIFT) was proposed by David Lowe in 1999 for image matching in computer vision. SIFT is not only strongly invariant to scale, translation, and rotation, but also robust to illumination changes, occlusion, and noise. RootSIFT is an improved version of SIFT: it applies L1 normalization followed by an element-wise square root to the SIFT descriptors, improving their descriptive power. The Vector of Locally Aggregated Descriptors (VLAD) encodes a set of local features into a fixed-length vector. VLA...
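The RootSIFT transform and VLAD aggregation described above can be sketched as follows. The RootSIFT step follows the standard definition (L1-normalize, then square root); the VLAD sketch assumes a codebook is already available, whereas in practice it is learned with k-means, and the power/L2 normalization at the end is the common convention rather than something this excerpt specifies.

```python
import numpy as np

def root_sift(desc, eps=1e-12):
    """RootSIFT: L1-normalize each (non-negative) SIFT descriptor,
    then take the element-wise square root."""
    desc = desc / (np.abs(desc).sum(axis=1, keepdims=True) + eps)
    return np.sqrt(desc)

def vlad(descriptors, centroids, eps=1e-12):
    """Aggregate local descriptors into a fixed-length VLAD vector:
    sum the residuals to each descriptor's nearest codebook centroid,
    then apply power and L2 normalization."""
    k, d = centroids.shape
    # Hard-assign each descriptor to its nearest centroid.
    dists = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = dists.argmin(axis=1)
    v = np.zeros((k, d))
    for i in range(k):
        sel = descriptors[assign == i]
        if len(sel):
            v[i] = (sel - centroids[i]).sum(axis=0)
    v = v.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))   # power normalization
    return v / (np.linalg.norm(v) + eps)  # L2 normalization

rng = np.random.default_rng(2)
sift = np.abs(rng.normal(size=(200, 128)))  # SIFT descriptors are non-negative
rs = root_sift(sift)
codebook = rng.normal(size=(16, 128))       # hypothetical 16-word codebook
vec = vlad(rs, codebook)
print(vec.shape)  # (2048,)
```

Note that the VLAD dimensionality is fixed at k x d (here 16 x 128 = 2048) regardless of how many local descriptors the image yields, which is what makes it suitable as the "low-level image feature" input to the fusion step.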

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F16/583; G06F16/55; G06F16/538; G06K9/46; G06K9/62; G06N3/04; G06N3/08; G06T9/00
CPC: G06F16/583; G06F16/55; G06F16/538; G06N3/08; G06T9/00; G06V10/464; G06N3/045; G06F18/2136; G06F18/253
Inventors: 于俊清, 吴泽斌, 何云峰
Owner HUAZHONG UNIV OF SCI & TECH