
Multi-modal joint event detection method based on pictures and sentences

A multi-modal event detection technology, applied in character and pattern recognition, still-image data retrieval, and related computer fields, aimed at improving event-classification performance.

Active Publication Date: 2021-10-22
HANGZHOU DIANZI UNIV

AI Technical Summary

Problems solved by technology

Therefore, the task of event extraction is more challenging.




Embodiment Construction

[0070] The accompanying drawings show non-restrictive schematic flow diagrams of preferred embodiments of the present invention; the technical solutions of the invention are described in detail below with reference to these drawings.

[0071] Event detection is an important part of the event extraction task: it identifies the image actions and text trigger words that mark the occurrence of events and classifies them into predefined event types. It has wide application in fields such as online public opinion analysis and intelligence gathering. As the carriers of network information become increasingly diverse, researchers have begun to focus on event detection across different media, that is, on automatically extracting events of interest from unstructured carriers such as pictures and text. Moreover, the same event may appear in pictures and sentences in different forms. However, the existing...


Abstract

The invention discloses a multi-modal joint event detection method based on pictures and sentences, which identifies events from pictures and sentences simultaneously. On one hand, the method uses existing single-modal data sets to train separate picture and text event classifiers; on the other hand, it uses existing picture-caption pairs to train a picture-sentence matching module that finds the picture and sentence with the highest semantic similarity in a multi-modal article, thereby obtaining feature representations of picture entities and words in a common space. These shared representations allow the picture and text event classifiers to share parameters, yielding a shared event classifier. Finally, the model is tested on a small amount of multi-modal labeled data, and the shared event classifier outputs the events described by the pictures and sentences, together with their types. By identifying events from both pictures and sentences and exploiting the complementarity of visual and textual features, the method improves single-modal event classification performance and recovers more complete event information from an article.
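The abstract describes a pipeline of three components: per-modality feature projection into a common space, picture-sentence matching by semantic similarity, and a single classifier shared across modalities. The following is a minimal sketch of that structure, not the patent's actual model: the dimensions, random projection matrices, and cosine-similarity matching are illustrative assumptions standing in for the learned modules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (assumptions, not taken from the patent text).
D_IMG, D_TXT, D_SHARED, N_EVENTS = 8, 6, 4, 3

# Projection matrices mapping each modality into a shared space.
# In the patented method these would be learned; here they are random stand-ins.
W_img = rng.normal(size=(D_IMG, D_SHARED))
W_txt = rng.normal(size=(D_TXT, D_SHARED))

# One classifier over the common space, so its parameters are
# shared between the picture branch and the text branch.
W_cls = rng.normal(size=(D_SHARED, N_EVENTS))

def to_shared(x, W):
    """Project features into the shared space and L2-normalize rows."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def match(img_feats, txt_feats):
    """For each sentence, the index of the most similar image (cosine sim)."""
    zi = to_shared(img_feats, W_img)
    zt = to_shared(txt_feats, W_txt)
    sim = zt @ zi.T                      # (n_sentences, n_images)
    return sim.argmax(axis=1), sim

def classify(z):
    """Shared event classifier: softmax over predefined event types."""
    logits = z @ W_cls
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy multi-modal article: 2 image feature vectors, 3 sentence vectors.
imgs = rng.normal(size=(2, D_IMG))
sents = rng.normal(size=(3, D_TXT))

best_img, sim = match(imgs, sents)                       # matching module
img_events = classify(to_shared(imgs, W_img)).argmax(axis=1)   # picture branch
txt_events = classify(to_shared(sents, W_txt)).argmax(axis=1)  # text branch
```

The key design point the sketch illustrates is that `W_cls` is applied to both modalities: because pictures and sentences are first mapped into the same space, one set of classifier parameters can serve both, which is how the complementary visual and textual signals reach a single event decision.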

Description

technical field [0001] The invention relates to an event detection method, specifically a multi-modal joint event detection method based on pictures and sentences, and belongs to the field of multi-modal information extraction. Background technique [0002] As modern technologies such as computers and mobile phones enter ordinary households, activities such as interacting on social platforms and browsing news websites have become the main ways people obtain information online, greatly simplifying access to information. The number of Internet users consuming this information continues to grow: according to the 47th "Statistical Report on the Development of the Internet in China" released by the China Internet Network Information Center, as of December 2020 the number of Internet users in China had reached 989 million, an increase of 85.4 million over March of the previous year. Therefore, a ...


Application Information

IPC (IPC8): G06F16/35; G06F16/55; G06K9/62
CPC: G06F16/35; G06F16/55; G06F18/285
Inventors: 张旻 (Zhang Min), 曹祥彪 (Cao Xiangbiao), 汤景凡 (Tang Jingfan), 姜明 (Jiang Ming)
Owner HANGZHOU DIANZI UNIV