
A cross-modal hash learning method based on anchor graph

A cross-modal learning method, applied in metadata-based database retrieval, character and pattern recognition, instruments, etc.; it addresses the problem that preserving the beneficial information of feature data has not been completely solved by existing methods.

Active Publication Date: 2022-07-08
JIUJIANG UNIV
Cites: 10 · Cited by: 0

AI Technical Summary

Problems solved by technology

[0006] The purpose of the present invention is to provide a cross-modal hash learning method based on anchor graphs. Existing cross-modal hash learning methods have not fully solved two problems on large-scale data sets: preserving the beneficial information of the feature data through graph structures, and selecting discriminative features when mapping raw feature data from a high-dimensional feature space into a low-dimensional Hamming space. To address these problems, a cross-modal hash learning method based on anchor graphs is proposed and applied to cross-modal retrieval tasks involving the image modality and the text modality.
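As a point of reference for the mapping mentioned above, the sketch below shows how raw features are commonly projected from a high-dimensional feature space into a low-dimensional Hamming space with a linear projection followed by sign binarization. This is not the patent's exact formulation; the projection matrix W and all names here are placeholders for illustration.

```python
# Minimal sketch (not the patent's formulation): map zero-centered features into a
# low-dimensional Hamming space via an assumed, already-learned projection matrix W.
import numpy as np

def to_hamming_space(X, W):
    """X: (n, d) zero-centered features; W: (d, r) projection matrix.
    Returns an (n, r) matrix of binary codes in {-1, +1} (sign(0) maps to 0 here)."""
    return np.sign(X @ W)

# Hypothetical example: 5 samples, 8-dimensional features, 4-bit codes.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
X -= X.mean(axis=0)              # zero-centering, as assumed by the method
W = rng.standard_normal((8, 4))  # stand-in for a learned projection matrix
B = to_hamming_space(X, W)
print(B)
```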



Examples


Embodiment Construction

[0031] A cross-modal hash learning method based on anchor graph. Let the features of n objects in the image modality and the text modality be X^(1) = {x_1^(1), x_2^(1), ..., x_n^(1)} and X^(2) = {x_1^(2), x_2^(2), ..., x_n^(2)}, where x_i^(1) ∈ R^(d_1) and x_i^(2) ∈ R^(d_2) denote the feature vectors of the i-th object in the image modality and the text modality, i = 1, 2, ..., n, and d_1 and d_2 denote the dimensions of the image-modality and text-modality feature vectors, respectively. It is further assumed that the feature vectors of both modalities have been preprocessed by zero-centering, that is, Σ_{i=1}^{n} x_i^(1) = 0 and Σ_{i=1}^{n} x_i^(2) = 0. Let A^(1) and A^(2) be the adjacency matrices of the image-modality and text-modality samples, respectively; the elements A_ij^(1) of matrix A^(1) and A_ij^(2) of matrix A^(2) represent the similarity between the i-th sample and the j-th sample in the image modality and the text modality, respectively. Suppose S ∈ {0,1}^(n×n) is the semantic correlation matrix between the samples of the two modalities, where S_ij represents the semantic correlation between the...
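To make this setup concrete, here is an illustrative NumPy sketch under the assumptions stated in [0031]. The variable names and the label-based construction of S are assumptions for illustration only, not taken from the patent: zero-centered features for both modalities and a binary semantic correlation matrix S built from class labels shared across the modalities.

```python
# Illustrative sketch of the setup in [0031] (names and label scheme are assumptions):
# zero-centered image/text features and a binary semantic correlation matrix S.
import numpy as np

rng = np.random.default_rng(1)
n, d1, d2 = 6, 10, 8                      # n objects, feature dims of the two modalities

X1 = rng.standard_normal((n, d1))         # image-modality features x_i^(1)
X2 = rng.standard_normal((n, d2))         # text-modality features  x_i^(2)
X1 -= X1.mean(axis=0)                     # zero-centering: sum_i x_i^(1) = 0
X2 -= X2.mean(axis=0)                     # zero-centering: sum_i x_i^(2) = 0

labels = rng.integers(0, 3, size=n)       # hypothetical single-label annotation
S = (labels[:, None] == labels[None, :]).astype(int)   # S_ij = 1 iff samples i, j share a label
print(S)
```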



Abstract

A cross-modal hash learning method based on anchor graph, characterized in that the method includes the following steps: (1) use an objective function designed with anchor graph technology to obtain the binary hash codes of objects in the image modality and the text modality; (2) in view of the non-convexity of the objective function, solve the unknown variables in the objective function by alternating updates; (3) based on the solution, obtain the projection matrices of the image modality and the text modality and use them to generate binary hash codes for the query samples and for the samples in the retrieval sample set; (4) based on the generated binary hash codes, calculate the Hamming distance from each query sample to each sample in the retrieval sample set; (5) complete the retrieval of the query samples with a cross-modal retriever based on approximate nearest-neighbor search. Based on the anchor graph technology, this method can quickly obtain an approximation of the true similarity matrix.
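The abstract's point that the anchor graph yields a fast approximation of the true similarity matrix can be illustrated with the standard anchor-graph construction. The sketch below is in the style of the common anchor-graph literature and is not necessarily the patent's exact formulation; the anchor selection and Gaussian affinity are assumptions.

```python
# A minimal anchor-graph sketch (standard construction, assumed rather than the patent's):
# approximate the full n x n similarity matrix by A_hat = Z diag(Z^T 1)^(-1) Z^T,
# where Z holds row-normalized sample-to-anchor affinities.
import numpy as np

def anchor_graph_similarity(X, anchors, sigma=1.0):
    """X: (n, d) features; anchors: (m, d) anchor points (e.g. k-means centers).
    Returns Z (n, m) and the (n, n) approximate similarity matrix."""
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)   # squared distances to anchors
    Z = np.exp(-d2 / (2.0 * sigma ** 2))
    Z /= Z.sum(axis=1, keepdims=True)                           # each row sums to 1
    Lam_inv = np.diag(1.0 / Z.sum(axis=0))                      # diag(Z^T 1)^(-1)
    return Z, Z @ Lam_inv @ Z.T

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 16))
anchors = X[rng.choice(100, size=8, replace=False)]             # stand-in for k-means anchors
Z, A_hat = anchor_graph_similarity(X, anchors)
print(A_hat.shape)                                              # (100, 100)
```

Only the n×m matrix Z needs to be stored and manipulated, so the full n×n similarity matrix never has to be formed explicitly, which is what makes this kind of approximation attractive on large-scale data sets.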

Description

Technical Field

[0001] The invention relates to a cross-modal hash learning method based on anchor graph.

Background Technique

[0002] With the rapid development of information technology, human society has entered the era of big data, and massive amounts of data from different fields and applications are generated all the time. Faced with this explosive growth of data, how to quickly retrieve the required information, and thereby ensure the effective use of the data, has become an urgent and very challenging problem of the big-data era.

[0003] Nearest neighbor search, also known as similarity search, plays an important role in many applications such as document retrieval, object recognition, and approximate image detection. Among the many methods for approximate nearest neighbor search, hash-based search (retrieval) methods have received more and more attention in recent years. Hash-based search methods can map high-dimensional feature data into a co...
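To illustrate why the compact binary codes mentioned above make approximate nearest-neighbor search cheap, the following sketch (purely illustrative; code length and data are made up) ranks a database by Hamming distance to a query code with a single vectorized comparison.

```python
# Illustration (assumed, not from the patent text): Hamming distance between {-1, +1}
# codes is just the number of disagreeing bit positions, so a whole database can be
# ranked against a query with one matrix-level operation.
import numpy as np

def hamming_distances(query_code, db_codes):
    """query_code: (r,) in {-1, +1}; db_codes: (N, r) in {-1, +1}."""
    return (db_codes != query_code).sum(axis=1)

rng = np.random.default_rng(3)
db_codes = np.sign(rng.standard_normal((1000, 32)))   # hypothetical 32-bit database codes
query = np.sign(rng.standard_normal(32))
dists = hamming_distances(query, db_codes)
top10 = np.argsort(dists)[:10]                        # approximate nearest neighbors
print(top10, dists[top10])
```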

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F16/907, G06K9/62
CPC: G06F18/213, G06F18/22
Inventors: 董西伟, 邓安远, 胡芳, 贾海英, 周军, 孙丽, 杨茂保, 王海霞
Owner: JIUJIANG UNIV