
Transmedia search method based on multi-mode information convergence analysis

A multimedia, multi-modal technology, applied in special data processing applications, instruments, and electrical digital data processing. It addresses problems such as the inability to retrieve images from audio examples (or audio from image examples) and unsatisfactory retrieval precision, achieving high accuracy and powerful, broadly applicable search.

Inactive Publication Date: 2008-05-14
ZHEJIANG UNIV
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

However, traditional content-based retrieval technology has two weaknesses. First, users can only retrieve media objects of the same modality as the query example; that is, they can retrieve images only through image examples and audio only through audio examples, but cannot retrieve images from an audio example or audio from an image example. Second, there is a semantic gap between the low-level features of media objects and their high-level semantics, so the precision rate is not ideal.



Examples


Embodiment

[0046] Assume there are 900 multimedia documents consisting of 900 images, 300 sound clips, and 700 texts. First, compute and extract the low-level features of all images, including the RGB color histogram, color coherence vector, and Tamura texture features, and then compute the pairwise distances between all images. For sound clips, extract four features: root mean square, zero-crossing rate, cut-off frequency, and spectral centroid, and then use the dynamic time warping (DTW) algorithm to compute the distances between all sound objects. For text, use TF/IDF vectorization to compute the distance between each pair of text objects. After computing the media-object distances, normalize the image, text, and sound distances separately. Then, for any two multimedia documents A and B, first find the distances between the text, sound, and image objects belonging to the two documents, and then calculate the...
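The two computational steps named above — DTW distance between per-frame audio feature sequences, and per-modality normalization so that image, sound, and text distances become comparable — can be sketched as follows. This is a minimal illustration, not code from the patent; the function names and min-max normalization choice are assumptions.

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences.
    Each element is a tuple of per-frame audio features (e.g. RMS,
    zero-crossing rate, cut-off frequency, spectral centroid)."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])  # Euclidean frame distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def min_max_normalize(dists):
    """Scale a list of distances into [0, 1] so distances computed in
    different modalities can be compared and fused."""
    lo, hi = min(dists), max(dists)
    if hi == lo:
        return [0.0] * len(dists)
    return [(d - lo) / (hi - lo) for d in dists]
```

DTW aligns sequences of different lengths, which is why it suits sound clips whose durations vary; the normalization step is what allows the later fusion of text, sound, and image distances into a single document-level distance.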


PUM

No PUM

Abstract

The invention relates to a cross-media search method based on multi-modal information fusion analysis. The method fuses and analyzes multi-modal information to understand multimedia semantics, realizing content-based multimedia document search, image search, sound search, and text search. A user can provide a query example of any modality to retrieve media objects or multimedia documents of any modality; for example, to search for images, the user can provide an image as the query example, or instead provide a sound, a text, or a combination of them. Since the invention not only uses keywords but also fuses and analyzes all media objects in the multimedia document, synthesizing the information carried by media of different modalities to understand the semantics, it obtains better search results. Since the query example and the returned results may be of different modalities, the method is powerful and widely applicable.

Description

Technical field

[0001] The invention relates to multimedia retrieval, and in particular to a cross-media retrieval method based on multi-modal information fusion analysis.

Background

[0002] Multimedia documents are a very common file type today. A multimedia document consists of multiple media objects of different modalities (including audio, images, and text) and carries certain semantics; multimedia encyclopedias, web pages, and slides in Microsoft PowerPoint format are all multimedia documents. In general, multimedia documents have two characteristics. First, their composition is complex: media objects of various modalities coexist in the same document. Second, the media objects of different modalities in the same document are semantically complementary; the semantics of the document are expressed jointly by its internal media objects. Therefore, when a certain media object is ambiguous, the document as a whole...
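The background describes a multimedia document as a container of media objects of different modalities whose semantics are expressed jointly. A minimal data-structure sketch of that model, with hypothetical names chosen for illustration (the patent does not prescribe any particular representation):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MediaObject:
    """A single media object of one modality inside a multimedia document."""
    modality: str          # "image", "audio", or "text"
    features: List[float]  # low-level feature vector for this object

@dataclass
class MultimediaDocument:
    """A multimedia document groups media objects of different modalities;
    its semantics are expressed jointly by all of its objects."""
    doc_id: str
    objects: List[MediaObject] = field(default_factory=list)

    def by_modality(self, modality: str) -> List[MediaObject]:
        """Select the member objects of one modality, e.g. all images."""
        return [o for o in self.objects if o.modality == modality]
```

Grouping objects by modality is the operation the embodiment relies on: document-level distance is assembled from the pairwise distances between the image, sound, and text objects of two documents.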

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06F17/30
Inventors: 潘云鹤 (Pan Yunhe), 庄越挺 (Zhuang Yueting), 吴飞 (Wu Fei), 杨易 (Yang Yi)
Owner: ZHEJIANG UNIV