Visual question answering method based on multi-modal deep feature fusion and model thereof

A multi-modal deep-feature technology applied in the field of visual question answering. It addresses two problems of existing models: cross-modal features cannot interact closely, and key feature information is easily lost; the method thereby improves prediction accuracy and performance.

Active Publication Date: 2022-04-26
SOUTHWEST JIAOTONG UNIV
Cites: 13 | Cited by: 5

AI Technical Summary

Problems solved by technology

[0006] In order to solve the problems that current visual question answering models are prone to losing key feature information and that their cross-modal features cannot interact closely, the present invention discloses a visual question answering method based on multi-modal deep feature fusion.



Examples


Embodiment Construction

[0090] The present invention will be clearly and completely described below in conjunction with the accompanying drawings. Those skilled in the art will be able to implement the present invention based on these descriptions. Before the present invention is described in conjunction with the accompanying drawings, it should be pointed out that:

[0091] The technical solutions and technical features provided in each part of the present invention, including the following description, may be combined with one another provided no conflict arises.

[0092] In addition, the embodiments of the present invention referred to in the following description are generally only some, not all, of the embodiments of the present invention. Therefore, all other embodiments obtained by persons of ordinary skill in the art, based on the embodiments of the present invention and without creative effort, shall fall within the protection scope of the present invention.

[0093] The term "MLP...



Abstract

The invention discloses a visual question answering method based on multi-modal deep feature fusion. The method comprises the following steps: (1) obtaining the data features of two modalities, image and text, through a convolutional neural network and a long short-term memory network, and then performing intra-modal and inter-modal attention modeling with the two obtained modality features; (2) constructing an attention network and stacking attention layers in series, where the two modal features serve as references for attention-weight learning, enabling deeper feature interaction; and (3) fusing the attention-weighted image information and text semantics through a multi-modal fusion function, and passing the fused features into a classifier, which combines them with answer text data to predict a result. In addition, the invention also discloses a visual question answering model based on multi-modal deep feature fusion. Compared with existing methods, the method offers good stability, higher prediction accuracy, and lower demands on the experimental hardware environment.
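To make the three steps concrete, here is a minimal PyTorch sketch of such a pipeline. Every specific choice in it — the toy CNN standing in for a pretrained image backbone, the layer dimensions, the use of multi-head attention for the intra- and inter-modal modeling, the additive fusion function, and the classifier head — is an illustrative assumption, not the patented architecture.

    # Hypothetical sketch of the abstract's three-step pipeline (PyTorch).
    import torch
    import torch.nn as nn

    class AttentionBlock(nn.Module):
        """One stacked layer: question self-attention (intra-modal), then
        image attention guided by question features (inter-modal)."""
        def __init__(self, dim=512, heads=8):
            super().__init__()
            self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm1 = nn.LayerNorm(dim)
            self.norm2 = nn.LayerNorm(dim)

        def forward(self, img, txt):
            txt = self.norm1(txt + self.self_attn(txt, txt, txt)[0])   # intra-modal
            img = self.norm2(img + self.cross_attn(img, txt, txt)[0])  # inter-modal
            return img, txt

    class VQANet(nn.Module):
        def __init__(self, vocab=10000, answers=3000, dim=512, layers=4):
            super().__init__()
            # Step 1: modality encoders (toy CNN in place of a pretrained backbone).
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 64, 3, stride=4, padding=1), nn.ReLU(),
                nn.Conv2d(64, dim, 3, stride=4, padding=1), nn.ReLU())
            self.embed = nn.Embedding(vocab, 300)
            self.lstm = nn.LSTM(300, dim, batch_first=True)
            # Step 2: serially stacked attention layers.
            self.blocks = nn.ModuleList(AttentionBlock(dim) for _ in range(layers))
            # Step 3: fusion + classifier (additive fusion is an assumption).
            self.fuse_img = nn.Linear(dim, dim)
            self.fuse_txt = nn.Linear(dim, dim)
            self.classifier = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                            nn.Linear(dim, answers))

        def forward(self, image, question):
            img = self.cnn(image)                     # (B, dim, H', W')
            img = img.flatten(2).transpose(1, 2)      # (B, regions, dim)
            txt, _ = self.lstm(self.embed(question))  # (B, tokens, dim)
            for blk in self.blocks:
                img, txt = blk(img, txt)
            fused = torch.relu(self.fuse_img(img.mean(1)) + self.fuse_txt(txt.mean(1)))
            return self.classifier(fused)             # answer logits

    logits = VQANet()(torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 14)))
    print(logits.shape)  # torch.Size([2, 3000])

Pooled image and text features are fused here by projecting each modality and summing; the patent's actual fusion function and attention-weighting scheme may differ.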

Description

Technical Field

[0001] The present invention relates to the field of visual question answering within multimodal data fusion research, and in particular to a visual question answering method and model based on multimodal deep feature fusion.

Background Technique

[0002] Visual question answering refers to the following task: given a picture and a question related to that picture, the goal is to combine the picture's visual information with the question's text content and obtain the answer by performing deep feature fusion on the image and text.

[0003] The cross-modal interaction methods adopted in early visual question answering research were based on simple feature combination. For example, question features represented by a bag-of-words model were directly concatenated with the convolutional features of the image and input into a logistic regression classifier; another example is the combination of graphic and text...
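The early baseline in paragraph [0003] is simple enough to sketch directly. The following Python snippet (using scikit-learn, with tiny made-up questions and a random stand-in for pooled CNN image features) shows bag-of-words question features concatenated with image features and fed to a logistic regression classifier.

    # Sketch of the early "simple combination" baseline from [0003]:
    # bag-of-words question features + CNN image features -> logistic regression.
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    questions = ["what color is the cat", "how many dogs are there"]
    answers = ["black", "two"]                    # answers treated as classes

    bow = CountVectorizer().fit_transform(questions).toarray()  # bag-of-words
    img_feats = np.random.randn(2, 2048)          # stand-in for CNN pooled features
    X = np.concatenate([bow, img_feats], axis=1)  # direct feature concatenation

    clf = LogisticRegression(max_iter=1000).fit(X, answers)
    print(clf.predict(X))

This is exactly the shallow interaction the invention criticizes: the two modalities meet only at the concatenation step, so no cross-modal feature interaction occurs before classification.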


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06N3/04; G06F40/284; G06V10/80; G06V10/774
CPC: G06F40/284; G06N3/044; G06N3/045; G06F18/253; G06F18/214; Y02D10/00
Inventors: 杜圣东, 邹芸竹, 李天瑞, 张凡, 张晓博, 赵小乐
Owner: SOUTHWEST JIAOTONG UNIV