
Cross-modal retrieval method and system based on dual coding and combination and storage medium

A cross-modal retrieval and encoding technology, applied in the field of video processing. It addresses the problems that existing methods cannot exploit the complementary information contained in video, which limits the robustness of the retrieval system and leaves retrieval accuracy too low for practical needs, and achieves the effect of improving retrieval accuracy and reducing the semantic gap between modalities.

Active Publication Date: 2020-05-22
SOUTH CHINA NORMAL UNIVERSITY

AI Technical Summary

Problems solved by technology

However, this type of method ignores the characteristics of video and cannot exploit the complementary information contained in video, such as spatio-temporal information and audio information. This limits the robustness of the retrieval system, and the accuracy of the retrieval results is not high enough to meet practical needs.



Examples


Embodiment Construction

[0047] The following describes the embodiments of the present invention in detail; examples of the embodiments are illustrated in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements, or elements having the same or similar functions, throughout. The embodiments described below with reference to the accompanying drawings are exemplary, are intended only to explain the present invention, and should not be construed as limiting the present invention. The step numbers in the following embodiments are set only for convenience of description; they do not limit the order of the steps in any way, and the execution order of the steps in the embodiments can be adaptively adjusted according to the understanding of those skilled in the art.

[0048] First, a cross-modal retrieval method based on dual coding and combination proposed according to an embodiment of the present invention will be described with referenc...



Abstract

The invention discloses a cross-modal retrieval method, system and device based on dual coding and combination. The method is a cross-modal retrieval algorithm based on dual encoding and dual joint embedding learning. It comprises the steps of extracting and encoding multiple features of a video through a neural network, performing multi-layer encoding on text features, learning and training two network models with joint video-text embedding, and obtaining text-to-video or video-to-text retrieval results from the two models. The method reduces the semantic gap between video features and natural-language text descriptions; it captures, learns and optimizes the latent information and the relationship between video and text in a targeted and complementary manner, and ultimately improves the retrieval accuracy between video and text. The method can be widely applied in the technical field of video processing.
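As a rough illustration of the dual-encoding and joint-embedding idea described in the abstract (not the patented architecture itself), the following PyTorch sketch encodes per-frame video features and a tokenized sentence into a shared embedding space and trains them with a triplet ranking loss over in-batch negatives. All module names, dimensions and the choice of loss are assumptions made for illustration.

```python
# Illustrative sketch only: a minimal dual-encoder for video-text retrieval.
# Module names, dimensions and the triplet loss are assumptions, not the
# patented architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VideoEncoder(nn.Module):
    """Encodes a sequence of per-frame CNN features into a joint-space vector."""
    def __init__(self, frame_dim=2048, hidden=512, embed_dim=256):
        super().__init__()
        self.gru = nn.GRU(frame_dim, hidden, batch_first=True, bidirectional=True)
        # Combine a global (mean-pooled) view with a temporal (GRU) view.
        self.proj = nn.Linear(frame_dim + 2 * hidden, embed_dim)

    def forward(self, frames):                      # frames: (B, T, frame_dim)
        mean_feat = frames.mean(dim=1)              # global appearance
        temporal, _ = self.gru(frames)              # temporal context
        temporal = temporal.mean(dim=1)
        joint = self.proj(torch.cat([mean_feat, temporal], dim=1))
        return F.normalize(joint, dim=-1)

class TextEncoder(nn.Module):
    """Encodes a tokenized sentence into the same joint space."""
    def __init__(self, vocab_size=10000, word_dim=300, hidden=512, embed_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_dim)
        self.gru = nn.GRU(word_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, embed_dim)

    def forward(self, tokens):                      # tokens: (B, L) word ids
        out, _ = self.gru(self.embed(tokens))
        return F.normalize(self.proj(out.mean(dim=1)), dim=-1)

def triplet_ranking_loss(v, t, margin=0.2):
    """Hinge loss over in-batch negatives for both retrieval directions."""
    sim = v @ t.t()                                  # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)                    # similarity of matching pairs
    cost_v2t = (margin + sim - pos).clamp(min=0)     # video as query, text negatives
    cost_t2v = (margin + sim - pos.t()).clamp(min=0) # text as query, video negatives
    mask = torch.eye(sim.size(0), dtype=torch.bool)  # ignore the positive pairs
    return cost_v2t.masked_fill(mask, 0).mean() + cost_t2v.masked_fill(mask, 0).mean()
```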

Description

technical field
[0001] The present invention relates to the technical field of video processing, and in particular to a cross-modal retrieval method, system, device and storage medium based on dual coding and combination.
Background technique
[0002] Modality: a source or form of data, such as text, audio, image, video, etc.
[0003] Cross-modality: data that exist in different forms but all describe the same thing or event.
[0004] Cross-modal retrieval: given a query instance in one modality, retrieve instances in another modality that are semantically similar or consistent with that instance.
[0005] With the development of the Internet and information technology, the variety of data keeps growing; common multimedia data include text, image, video and audio data. The rapid growth of video on the Internet makes searching for video content with natural-language queries a significant challenge. Compared with simple images, video is composed of consecutive m...
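Under the definition in [0004], cross-modal retrieval in a learned joint space reduces to nearest-neighbour search: encode the query from one modality, encode the candidates from the other, and rank by similarity. The snippet below is a minimal, self-contained illustration using dummy embeddings; the function name and dimensions are hypothetical and not taken from the patent.

```python
# Hypothetical illustration of cross-modal retrieval in a shared embedding space:
# given a text query vector and pre-computed video vectors, rank by cosine similarity.
import torch
import torch.nn.functional as F

def retrieve(query_embedding, candidate_embeddings, k=5):
    """Return indices of the k candidates most similar to the query."""
    q = F.normalize(query_embedding, dim=-1)
    c = F.normalize(candidate_embeddings, dim=-1)
    scores = c @ q                      # cosine similarity of each candidate to the query
    return scores.topk(k).indices

# Dummy data: one 256-d text query against 1,000 video embeddings.
text_query = torch.randn(256)
video_bank = torch.randn(1000, 256)
print(retrieve(text_query, video_bank).tolist())
```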


Application Information

IPC (8): G06F16/732; G06F16/783
CPC: G06F16/732; G06F16/7328; G06F16/7844; G06F16/783
Inventor: 肖菁, 崔晓桃
Owner: SOUTH CHINA NORMAL UNIVERSITY