
Cross-media retrieval method based on deep learning and consistent expression spatial learning

A deep learning and consistent expression space learning technology, applied in the field of cross-media retrieval, which can solve the problems that the similarity of features from the two modalities cannot be measured accurately, that the rich global content of an image cannot be expressed well, and that the differing dimensions of the feature-vector indices are not taken into account.

Active Publication Date: 2016-11-09
HUAQIAO UNIVERSITY
Cites: 3 · Cited by: 34

AI Technical Summary

Problems solved by technology

The SIFT local features used in prior work are effective for object retrieval, but they cannot express the rich global content of an image well. The standard Pearson correlation algorithm used there does not consider the directionality of the feature vectors or the differing dimensions of the individual feature indices, so it cannot measure the similarity between the two modalities' features accurately.

Method used


Image

The application is accompanied by three drawings, each captioned "Cross-media retrieval method based on deep learning and consistent expression spatial learning".

Examples

Experimental program
Comparison scheme
Effect test

Detailed Description of the Embodiments

[0038] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0039] In order to overcome the shortcomings of the prior art, the present invention provides a cross-media retrieval method based on deep learning and consistent expression space learning. The method performs mutual retrieval of multimedia information between the image and text modalities, and greatly improves cross-media retrieval accuracy.

[0040] The main steps of the method of the present invention are as follows (a minimal illustrative sketch of steps 1 and 2 is given after the list):

[0041] 1) After acquiring the image data and the text data, extract the image features I and the text features T respectively, obtaining the image feature space and the text feature space;

[0042] 2) The image feature space is mapped to a new image feature space U_I, and the text feature space is mapped to a new text feature space U_T; the new image feature space U_I and the new text feature space U_T are isomorphic;

[00...
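The following sketch illustrates steps 1 and 2 under stated assumptions: the feature extractors, the common dimension d, and the projection matrices W_img and W_txt are illustrative placeholders, since the excerpt does not specify how the mappings into the isomorphic spaces are learned.

```python
import numpy as np

# Step 1) image features I and text features T, assumed pre-extracted
# (e.g. by a deep network for images and a text model for text; the
# concrete extractors are not given in the excerpt).
n, d_img, d_txt, d = 100, 4096, 300, 128
rng = np.random.default_rng(0)
I = rng.random((n, d_img))          # image feature space
T = rng.random((n, d_txt))          # text feature space

# Step 2) map both feature spaces into new spaces U_I and U_T of the
# same dimension d, so that U_I and U_T are isomorphic. The linear
# projections below are hypothetical stand-ins for the learned mappings.
W_img = rng.standard_normal((d_img, d))
W_txt = rng.standard_normal((d_txt, d))
U_I = I @ W_img                     # new image feature space (n x d)
U_T = T @ W_txt                     # new text feature space  (n x d)
```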


PUM

No PUM

Abstract

The invention relates to a cross-media retrieval method based on deep learning and consistent expression spatial learning. Starting from two aspects, feature selection and similarity estimation between two highly heterogeneous feature spaces, the invention puts forward a cross-media retrieval method that can greatly improve retrieval accuracy for cross-media information in the two modalities of image and text. The method performs mutual retrieval of multimedia information between the image and text modalities, and cross-media retrieval accuracy is improved to a large extent. In the proposed model, a modified vector inner product is adopted as the similarity measure: the directions of the feature vectors of the two modalities are taken into account, and the influence of differing index dimensions is eliminated through centering, that is, the mean of the elements is subtracted from each element of a vector, the correlation of the two mean-removed vectors is then computed, and an accurate similarity is obtained.
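As a minimal sketch of the similarity measure described above, the snippet below (the function name centered_similarity and the example data are illustrative assumptions, not from the patent) centers each feature vector by subtracting the mean of its elements and then computes the correlation of the mean-removed vectors.

```python
import numpy as np

def centered_similarity(u, v, eps=1e-12):
    """Similarity of two feature vectors as described in the abstract:
    subtract the element mean from each vector (centering removes the
    influence of differing index dimensions), then compute the
    correlation of the two mean-removed vectors."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    u_c = u - u.mean()
    v_c = v - v.mean()
    return float(np.dot(u_c, v_c) /
                 (np.linalg.norm(u_c) * np.linalg.norm(v_c) + eps))

# Usage: rank text features in the common space by similarity to an image feature.
image_feat = np.random.rand(128)
text_feats = np.random.rand(10, 128)
scores = [centered_similarity(image_feat, t) for t in text_feats]
ranking = np.argsort(scores)[::-1]   # most similar text first
```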

Description

Technical field

[0001] The present invention relates to cross-media retrieval technology, and more specifically, to a cross-media retrieval method based on deep learning and consistent expression space learning.

Background technique

[0002] The research object of cross-media retrieval is how to use a computer to carry out cross-media information retrieval, that is, to search for text information associated with an input image, or to search for images associated with an input text.

[0003] The application fields of cross-media retrieval systems include information retrieval, map recognition, image labeling, etc. With the rapid development of the Internet today, various network platforms, including news websites, microblogs, social networks, and image and video sharing websites, are increasingly changing how people acquire knowledge and maintain social relations; multimedia data is also growing rapidly, and various types of cross-media information combin...

Claims


Application Information

IPC(8): G06F17/30
CPC: G06F16/43
Inventors: 杜吉祥, 邹辉, 翟传敏, 范文涛, 王靖, 刘海建
Owner: HUAQIAO UNIVERSITY