A cross-modal image-text retrieval method based on category information alignment

A category-information and text technology, applied in the field of image-text cross-modal retrieval based on category information alignment, addressing the problem that existing cross-modal retrieval methods have insufficient retrieval accuracy

Active Publication Date: 2022-03-25
UNIV OF ELECTRONICS SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

However, the retrieval accuracy of this type of cross-modal retrieval method is not high enough.




Embodiment Construction

[0065] Specific embodiments of the present invention are described below in conjunction with the accompanying drawings, so that those skilled in the art can better understand the present invention. It should be noted that in the following description, detailed descriptions of known functions and designs are omitted where they would obscure the main content of the present invention.

[0066] In deep-learning-based cross-modal retrieval, the most commonly retrieved modality pair is image and text. In the present invention, the image I, the corresponding text T, and the category information C are stored as one image-text pair instance in the training data set, so that N image-text pair instances constitute the training data set, which can be written as D = {(I_n, T_n, C_n), n = 1, ..., N}. The corresponding real image features (referred to as true image features) and real text features (referred to as true text features) are then extracted for each instance. In this embodiment, ...
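As a concrete illustration, the training data set described in [0066] can be sketched as follows; the class and field names are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class ImageTextPair:
    image: np.ndarray   # image I_n (pixels or a precomputed feature vector)
    text: str           # the paired text T_n
    category: int       # category label C_n shared by the image and the text

# N such instances (I_n, T_n, C_n), n = 1, ..., N, constitute the training set.
training_set: List[ImageTextPair] = []
```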



Abstract

The invention discloses an image-text cross-modal retrieval method based on category information alignment, which aims to maintain the distinction between image-text instances of different semantic categories while eliminating heterogeneity differences. To achieve this goal, the present invention innovatively introduces category information into the common representation space, i.e., the image-text common space, to minimize a discrimination loss, and introduces a cross-modal loss to align information from different modalities. In addition, the present invention adopts category information embedding to generate fake features, rather than using label information directly as other DNN-based methods do, and minimizes a modality invariance loss in the category common space to learn modality-invariant features. Under the guidance of this learning strategy, the present invention makes full use of the pairwise-similarity semantic information of the image-text coupling to ensure that the learned representation has both the discrimination of the semantic structure and invariance across modalities.
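To make the learning strategy in the abstract concrete, the following is a minimal PyTorch sketch of how the three losses could be combined; every name here (classifier, cat_embed, the weighting factors) is an illustrative assumption, not the patent's exact formulation:

```python
import torch
import torch.nn.functional as F

def total_loss(img_feat, txt_feat, labels, classifier, cat_embed,
               lambda_cm=1.0, lambda_inv=1.0):
    # Discrimination loss: both modalities must remain classifiable by
    # category in the common representation space.
    l_disc = (F.cross_entropy(classifier(img_feat), labels)
              + F.cross_entropy(classifier(txt_feat), labels))

    # Cross-modal loss: pull the paired image and text features together.
    l_cm = F.mse_loss(img_feat, txt_feat)

    # Modality invariance loss: "fake" features generated by a category
    # embedding act as a modality-free anchor both real features match.
    fake = cat_embed(labels)  # e.g. an nn.Embedding over category labels
    l_inv = F.mse_loss(img_feat, fake) + F.mse_loss(txt_feat, fake)

    return l_disc + lambda_cm * l_cm + lambda_inv * l_inv
```

In a full model, img_feat and txt_feat would come from image and text encoders trained jointly against this combined objective.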

Description

Technical field

[0001] The invention belongs to the technical field of image-text cross-modal retrieval, and more specifically relates to an image-text cross-modal retrieval method based on category information alignment.

Background technique

[0002] Cross-modal retrieval refers to the process of mutual retrieval among data of different modalities. Existing mainstream cross-modal retrieval methods fall into three types.

[0003] The first type is cross-modal retrieval based on basic subspace learning. It mainly learns projection matrices from paired datasets that share the same semantic information, projects the features of different modalities into a common latent subspace, and then measures the similarity of the different modalities in that space. Such methods, e.g. canonical correlation analysis-based and kernel-based methods, learn linear projections or choose appropriate kernel functions to generate common representations by maximizing the pairwise correlations between t...
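As a small, self-contained illustration of the subspace-learning idea in [0003] (not taken from the patent; the toy data and dimensions are assumptions), canonical correlation analysis can learn the linear projections, after which similarity is measured in the common subspace:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

img_feats = np.random.randn(100, 512)   # toy image features for 100 pairs
txt_feats = np.random.randn(100, 300)   # toy text features for the same pairs

# Learn projections that maximize pairwise correlation between modalities.
cca = CCA(n_components=32)
cca.fit(img_feats, txt_feats)
img_c, txt_c = cca.transform(img_feats, txt_feats)

def cosine(a, b):
    # Cosine similarity between each row of a and each row of b.
    return a @ b.T / (np.linalg.norm(a, axis=1, keepdims=True)
                      * np.linalg.norm(b, axis=1))

# Retrieval: rank all texts for image query 0 by similarity in the subspace.
scores = cosine(img_c[:1], txt_c)
```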


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06F16/432, G06F16/48, G06N3/04, G06N3/08, G06T9/00
CPC: G06F16/434, G06F16/48, G06T9/002, G06N3/08, G06N3/045
Inventors: 杨阳, 王威扬, 何仕远 (Yang Yang, Wang Weiyang, He Shiyuan)
Owner: UNIV OF ELECTRONICS SCI & TECH OF CHINA