
Adversarial cross-media retrieval method based on a restricted text space

An adversarial cross-media retrieval technology in the field of computer vision. It addresses the loss of action and interaction information in image features, the unsuitability of pre-trained features for cross-media retrieval, and the degradation of cross-media retrieval performance caused by the modality gap.

Active Publication Date: 2018-07-24
PEKING UNIV SHENZHEN GRADUATE SCHOOL

AI Technical Summary

Problems solved by technology

However, these features are pre-trained on datasets that differ from the data used in cross-media retrieval, so the extracted features are not well suited to the task.
[0004] The second defect lies in the choice of the homogeneous feature space.
Such a text feature therefore also loses the rich action and interaction information contained in the image, which shows that, for cross-media retrieval, the Word2Vec space is not an effective text feature space.
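One way to see why a Word2Vec-style text space struggles with actions and interactions: a common baseline (assumed here for illustration; the page does not spell out the criticized construction) builds the text feature by averaging the word vectors, which is insensitive to word order and hence to who does what to whom. A toy numpy sketch with made-up vectors:

```python
# Toy illustration (not the patent's implementation): averaging Word2Vec-style
# word vectors discards word order, so sentences describing opposite
# interactions collapse to the same text feature. Vectors are invented.
import numpy as np

word_vecs = {
    "man":   np.array([0.9, 0.1, 0.0]),
    "bites": np.array([0.2, 0.8, 0.3]),
    "dog":   np.array([0.1, 0.2, 0.9]),
}

def avg_word2vec(sentence):
    """Bag-of-words text feature: mean of the word vectors."""
    return np.mean([word_vecs[w] for w in sentence.split()], axis=0)

a = avg_word2vec("man bites dog")
b = avg_word2vec("dog bites man")
print(np.allclose(a, b))  # True: the two interactions are indistinguishable
```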
[0005] The third defect lies in the difference between the feature distributions of data from different modalities.
Although existing methods map the features of different modalities into a homogeneous feature space, a modality gap remains between them: the feature distributions still differ markedly, which degrades cross-media retrieval performance.
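As a toy, hedged illustration of this modality gap (synthetic features, not the patent's data or networks), the snippet below shows that features living in the same space can still be told apart by a trivial modality test when their distributions are shifted:

```python
# Toy illustration of the "modality gap": image and text features in a shared
# space but drawn from shifted distributions are easy to separate.
import numpy as np

rng = np.random.default_rng(0)
img_feats = rng.normal(loc=0.6, scale=0.2, size=(1000, 64))  # pretend image features
txt_feats = rng.normal(loc=0.4, scale=0.2, size=(1000, 64))  # pretend text features

# A one-line "modality classifier": threshold the mean activation.
threshold = 0.5
img_correct = (img_feats.mean(axis=1) > threshold).mean()
txt_correct = (txt_feats.mean(axis=1) <= threshold).mean()
print((img_correct + txt_correct) / 2)  # close to 1.0 -> distributions clearly differ
```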




Embodiment Construction

[0063] The present invention is described further below through embodiments in conjunction with the accompanying drawings, which in no way limit the scope of the invention.

[0064] The invention provides an adversarial cross-media retrieval method based on a restricted text space. Its core idea is to learn the restricted text space and to measure the similarity between images and texts within it. Based on this restricted text space, the method extracts image and text features suited to cross-media retrieval by simulating human cognition, maps image features from the image space into the text space, and introduces an adversarial training mechanism that continuously reduces the difference between the feature distributions of different modalities during learning. The feature extraction network, feature mapping network and modality classifier of the invention, their implementation, and the training steps of the networks are described i...
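The following PyTorch-style sketch is a minimal, hedged reading of that description: layer sizes, dimensions and optimizer settings are assumptions for illustration, and the feature extraction network is stood in for by random tensors. It shows a feature mapping network projecting image features into the text space while a modality classifier and the mapper play the adversarial min-max game described above:

```python
# Minimal sketch, assuming standard adversarial training; not the patent's
# exact configuration. Image features are mapped into the learned (restricted)
# text space; a modality classifier and the mapping network play a min-max game
# so the two feature distributions become harder to tell apart.
import torch
import torch.nn as nn

TEXT_DIM, IMG_DIM, COMMON_DIM = 300, 2048, 300  # assumed sizes

# Feature mapping network: projects pre-extracted image features
# (e.g. from a pre-trained CNN) into the text space.
img_mapper = nn.Sequential(
    nn.Linear(IMG_DIM, 1024), nn.ReLU(), nn.Linear(1024, COMMON_DIM))

# Modality classifier: predicts whether a common-space feature came
# from an image (label 0) or a text (label 1).
modality_clf = nn.Sequential(
    nn.Linear(COMMON_DIM, 128), nn.ReLU(), nn.Linear(128, 2))

opt_map = torch.optim.Adam(img_mapper.parameters(), lr=1e-4)
opt_clf = torch.optim.Adam(modality_clf.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

def train_step(img_feats, txt_feats):
    """One adversarial step: the classifier minimizes modality-prediction
    error, then the mapper tries to fool it (image features labeled 'text')."""
    img_in_text_space = img_mapper(img_feats)

    # 1) Update the modality classifier on true modality labels.
    feats = torch.cat([img_in_text_space.detach(), txt_feats])
    labels = torch.cat([torch.zeros(len(img_feats)), torch.ones(len(txt_feats))]).long()
    loss_clf = ce(modality_clf(feats), labels)
    opt_clf.zero_grad(); loss_clf.backward(); opt_clf.step()

    # 2) Update the mapping network to fool the classifier, which pushes the
    #    mapped image features toward the text feature distribution.
    loss_adv = ce(modality_clf(img_mapper(img_feats)),
                  torch.ones(len(img_feats)).long())
    opt_map.zero_grad(); loss_adv.backward(); opt_map.step()
    return loss_clf.item(), loss_adv.item()

# Dummy batch standing in for pre-extracted image and text features.
print(train_step(torch.randn(8, IMG_DIM), torch.randn(8, TEXT_DIM)))
```

In this reading, the classifier gets better at detecting which modality a feature came from while the mapper is rewarded for making mapped image features look like text features, which is one standard way of shrinking the distribution gap mentioned above.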



Abstract

The invention discloses an adversarial cross-media retrieval method based on a restricted text space. The method designs a feature extraction network, a feature mapping network and a modality classifier; learns the restricted text space; extracts image and text features suited to cross-media retrieval; and maps image features from the image space into the text space. An adversarial training mechanism continuously reduces the difference between the feature distributions of different modality data during learning, thereby realizing cross-media retrieval. The method better fits how people behave in cross-media retrieval tasks; it obtains image and text features better suited to such tasks and compensates for the limited expressive ability of pre-trained features; and the introduced adversarial learning mechanism further improves retrieval accuracy through a min-max game between the modality classifier and the feature mapping network.
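The min-max game between the modality classifier and the feature mapping network can be written, under the standard adversarial-training formulation (an assumption for illustration; the patent's exact objective is not reproduced on this page), as:

```latex
% Standard adversarial (min-max) objective, assumed for illustration:
% D is the modality classifier, G the feature mapping network,
% t a text feature and v an image feature mapped into the restricted text space.
\min_{G} \max_{D} \;
  \mathbb{E}_{t}\big[\log D(t)\big]
  + \mathbb{E}_{v}\big[\log\big(1 - D(G(v))\big)\big]
```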

Description

Technical field

[0001] The invention relates to the technical field of computer vision, and in particular to an adversarial cross-media retrieval method based on a restricted text space.

Background technique

[0002] With the advent of the Web 2.0 era, large amounts of multimedia data (images, texts, videos, audios, etc.) have accumulated and spread on the Internet. Unlike traditional single-modal retrieval, cross-media retrieval performs bidirectional retrieval between data of different modalities, such as retrieving images with text queries and retrieving texts with image queries. However, because multimedia data are inherently heterogeneous, their similarity cannot be measured directly. The core problem of this type of task is therefore to find a homogeneous mapping space in which the similarity between heterogeneous multimedia data can be measured directly. In the current field of cross-media retrieval, a great deal of research has been done on the basis of this problem,...
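As a hedged sketch of what "measuring similarity directly in a homogeneous space" amounts to in practice (function names and dimensions are placeholders, not the patent's implementation), bidirectional retrieval reduces to ranking gallery items of the other modality by a simple similarity score:

```python
# Sketch of the retrieval step implied above: once image and text features
# live in one homogeneous space, retrieval is a nearest-neighbor ranking by
# a similarity such as cosine similarity. Features here are random toys.
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def retrieve(query_feat, gallery_feats, top_k=3):
    """Rank gallery items (texts for an image query, or images for a text query)."""
    scores = [cosine_sim(query_feat, g) for g in gallery_feats]
    return np.argsort(scores)[::-1][:top_k]

rng = np.random.default_rng(1)
query = rng.normal(size=128)           # one query in the common space
gallery = rng.normal(size=(10, 128))   # gallery of the other modality
print(retrieve(query, gallery))        # indices of the top-3 matches
```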


Application Information

IPC(8): G06F17/30; G06N3/08
CPC: G06F16/2462; G06F16/285; G06F16/5846; G06N3/084; H04N21/44008; G06N3/08; G06N3/044; G06N3/045
Inventor: 王文敏, 余政, 王荣刚, 李革, 王振宇, 赵辉, 高文
Owner: PEKING UNIV SHENZHEN GRADUATE SCHOOL