Multi-modality vocabulary representing method based on dynamic fusing mechanism

A vocabulary-representation and multi-modality technology, applied in special data processing applications, instruments, unstructured text data retrieval, and similar fields; it addresses problems such as inaccurate vocabulary weights, failure to account for differences between words, and inaccurate representation results.

Active Publication Date: 2017-12-15
Owner: INST OF AUTOMATION CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

Multimodal vocabulary representation methods in the prior art do not take the differences between words into account. In practice, the semantic representation of more abstract words depends more on the text modality, while the semantic representation of more concrete words relies more on the visual modality; different types of words therefore carry different weights in different modalities. Failing to distinguish between words leads to inaccurate per-modality weights and, in turn, inaccurate final representation results.




Detailed Description of the Embodiments

[0040] Hereinafter, preferred embodiments of the present invention will be described with reference to the drawings. Those skilled in the art should understand that these embodiments are only used to explain the technical principles of the present invention and are not intended to limit the protection scope of the present invention.

[0041] As shown in Figure 1, which is the flow chart of the multi-modal vocabulary representation method based on a dynamic fusion mechanism provided by the present invention, the method includes steps 1, 2 and 3, wherein:

[0042] Step 1: Calculate the text representation vector of the vocabulary to be represented in the text modality and the picture representation vector of the vocabulary to be represented in the visual modality, respectively;

[0043] Calculating the text representation vector and the picture representation vector of the vocabulary transforms the vocabulary into a form that a computer can process. In practical applications, the calculat...
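The excerpt above is truncated before it names concrete models, so the following is only a minimal sketch of Step 1 under common assumptions: the text representation vector is taken from a pre-trained word-embedding lookup and the picture representation vector is obtained by averaging image features. The lookup tables, names, and dimensions (TEXT_EMBEDDINGS, IMAGE_FEATURES, 300, 4096) are hypothetical stand-ins, not taken from the patent.

```python
import numpy as np

# Hypothetical pre-trained lookup tables (illustrative stand-ins only).
TEXT_EMBEDDINGS = {
    "dog": np.random.rand(300),       # e.g. a word2vec-style text embedding
    "idea": np.random.rand(300),
}
IMAGE_FEATURES = {
    "dog": np.random.rand(10, 4096),  # e.g. CNN features of 10 images tagged "dog"
}

def text_vector(word):
    """Text-modality representation: look up a pre-trained word embedding."""
    return TEXT_EMBEDDINGS[word]

def picture_vector(word, dim=4096):
    """Visual-modality representation: average the image features associated
    with the word; words without associated images fall back to a zero vector."""
    feats = IMAGE_FEATURES.get(word)
    if feats is None:
        return np.zeros(dim)
    return feats.mean(axis=0)

print(text_vector("dog").shape, picture_vector("dog").shape)  # (300,) (4096,)
```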



Abstract

The invention provides a multi-modality vocabulary representing method. The method comprises the steps of: calculating a text representing vector of a vocabulary to be represented in the text modality and a picture representing vector of the vocabulary to be represented in the visual modality; inputting the text representing vector into a pre-established text modality weight model to obtain the weight of the text representing vector in the text modality; inputting the picture representing vector into a pre-established visual modality weight model to obtain the weight of the picture representing vector in the visual modality; and calculating a multi-modality vocabulary representing vector according to the text representing vector, the picture representing vector, and their respective weights. The text modality weight model is a neural network model whose input is the text representing vector and whose output is the weight of the text representing vector in the text modality; the visual modality weight model is a neural network model whose input is the picture representing vector and whose output is the weight of the picture representing vector in the visual modality.
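A minimal sketch of the pipeline described in this abstract, assuming each weight model is a single-layer network with a sigmoid output and that the weighted vectors are concatenated; those choices, the random initialisation, and all dimensions are illustrative assumptions, since the abstract only states that the weight models are neural networks mapping a modality vector to a weight.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ModalityWeightModel:
    """Toy weight model: a single-layer network mapping a modality vector to a
    scalar weight in (0, 1). In practice the parameters would be trained."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.01, size=dim)
        self.b = 0.0

    def weight(self, vec):
        return sigmoid(self.w @ vec + self.b)

def fuse(text_vec, pic_vec, text_model, pic_model):
    """Weight each modality vector by its predicted weight and concatenate the
    results (concatenation rather than summation is an assumption here)."""
    w_text = text_model.weight(text_vec)
    w_pic = pic_model.weight(pic_vec)
    return np.concatenate([w_text * text_vec, w_pic * pic_vec])

# Usage with random stand-in vectors for a single word
text_vec = np.random.rand(300)    # text representation vector
pic_vec = np.random.rand(4096)    # picture representation vector
multimodal = fuse(text_vec, pic_vec,
                  ModalityWeightModel(300), ModalityWeightModel(4096))
print(multimodal.shape)           # (4396,)
```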

Description

Technical field

[0001] The invention belongs to the technical field of natural language processing and specifically provides a multimodal vocabulary representation method based on a dynamic fusion mechanism.

Background technique

[0002] Multimodal vocabulary representation is a basic task of natural language processing that directly affects the performance of the entire natural language processing system. Here, a modality refers to a method or angle from which data about the thing to be described is collected; each such method or angle of data collection is called a modality. Multimodal vocabulary representation fuses information from multiple modalities, mapping semantically similar words from different modalities into a shared high-dimensional space. Compared with single-modality vocabulary representation, multimodal vocabulary representation is closer to the process by which humans learn word concepts and achieves better performance in natural language processing tasks. ...
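To make "mapping semantically similar words into a high-dimensional space" concrete: representation quality is commonly judged by whether related words end up with nearby vectors, for example via cosine similarity. The snippet below is a generic illustration, not an evaluation protocol taken from the patent.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity of two word vectors; semantically related words should
    score higher than unrelated ones once mapped into the shared space."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy check with random vectors (real vectors would come from the fusion step)
a, b = np.random.rand(300), np.random.rand(300)
print(cosine_similarity(a, b))
```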


Application Information

IPC(8): G06F17/30, G06F17/27
CPC: G06F16/36, G06F40/20
Inventors: 王少楠, 张家俊, 宗成庆
Owner: INST OF AUTOMATION CHINESE ACAD OF SCI