
A multi-attention mechanism video description method based on spatio-temporal and channel

A video description and attention technology, applied in the field of optical communication, that addresses the problems of ineffective use of video features, the weakening influence of video features on word prediction over decoding time, and the reduced sentence-generation ability of the model, achieving the effects of simplified processing steps, improved description quality, and improved efficiency.

Active Publication Date: 2021-06-04
UNIV OF ELECTRONICS SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

[0003] The first problem is that video features are not used effectively. In the original paper, the video features are fed into the decoder only at the first decoding step and not at subsequent time steps. As the output sequence grows longer, the influence of the video features on word prediction weakens, which reduces the model's ability to generate sentences.
[0004] A direct solution is to supply the video features at every decoding step. However, because a video consists of many consecutive frames, simply mean-pooling the frame features and feeding the result to the decoder at every step still fails to exploit the video features effectively.
[0006] The second problem is the consistency between visual content features and the sentence description. Although temporal-attention-based methods improve the utilization of video features, at a deeper level they still do not fully model the relationship between the video features and the sentence description. This leads to the second problem: how to ensure that the generated sentence remains consistent with the visual content features.
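The temporal-attention alternative mentioned above (re-scoring the frame features against the decoder state at every step, rather than mean-pooling them once) can be illustrated with a minimal NumPy sketch. All projection matrices here are random placeholders standing in for learned parameters; the shapes and names are illustrative assumptions, not the patent's actual parameterization.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention(frame_feats, h_prev, W_f, W_h, v):
    """Score each frame against the previous decoder state and
    return an attention-weighted context vector.

    frame_feats: (T, D) per-frame CNN features for T frames
    h_prev:      (H,)   previous decoder hidden state
    W_f, W_h, v: learned projections (random placeholders here)
    """
    scores = np.tanh(frame_feats @ W_f + h_prev @ W_h) @ v  # (T,)
    alpha = softmax(scores)                                  # attention weights
    return alpha @ frame_feats, alpha                        # context (D,), weights (T,)

# toy shapes: 5 frames, 8-dim features, 6-dim state, 4-dim attention space
T, D, H, A = 5, 8, 6, 4
rng = np.random.default_rng(0)
ctx, alpha = temporal_attention(
    rng.standard_normal((T, D)), rng.standard_normal(H),
    rng.standard_normal((D, A)), rng.standard_normal((H, A)),
    rng.standard_normal(A))
```

Because `alpha` is recomputed from `h_prev` at every decoding step, the context vector changes as the sentence is generated, instead of being a single fixed mean-pooled feature.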



Examples


Embodiment

[0059] Figure 1 is a schematic diagram of the multi-attention mechanism video description method based on spatio-temporal and channel attention of the present invention.

[0060] In this embodiment, as shown in Figure 1, the multi-attention mechanism video description method based on spatio-temporal and channel attention of the present invention extracts powerful and effective visual features from the temporal domain, the spatial domain, and the channels respectively, strengthening the representation ability of the model. The method is described in detail below and specifically includes the following steps:

[0061] S1. Randomly extract M videos from the video library, then input the M videos simultaneously to the convolutional neural network (CNN);
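The sampling part of step S1 can be sketched as follows. The CNN forward pass itself is framework-specific and is only stubbed; the video-ID scheme and batch size are illustrative assumptions.

```python
import random

def sample_videos(video_library, M, seed=None):
    """Step S1: randomly draw M distinct videos from the library to
    form one mini-batch. The subsequent CNN feature extraction is
    framework-specific and not shown here."""
    rng = random.Random(seed)
    return rng.sample(video_library, M)

# hypothetical library of 100 video IDs, mini-batch of M = 8
library = [f"vid_{i:04d}" for i in range(100)]
batch = sample_videos(library, M=8, seed=42)
```

Sampling without replacement (`random.sample`) guarantees the M videos in a batch are distinct.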

[0062] S2. Train the attention-based LSTM neural network:

[0063] Set the maximum number of training rounds to H, and the maximum number of iterations in each round to T; initialize the word vector of the word at the initial moment, w_0, and the hidden state h_0 to zero vectors;
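The initialization in step S2 can be sketched as below. The embedding and hidden dimensions are illustrative hyper-parameters of this sketch, not values given in the patent.

```python
import numpy as np

def init_decoder_state(embed_dim, hidden_dim):
    """Step S2 initialisation: the initial-moment word vector w_0 and
    the hidden state h_0 are both zero vectors. embed_dim/hidden_dim
    are placeholder hyper-parameters."""
    w0 = np.zeros(embed_dim)
    h0 = np.zeros(hidden_dim)
    return w0, h0

H_ROUNDS, T_ITERS = 3, 10   # maximum training rounds H / iterations T per round
w0, h0 = init_decoder_state(embed_dim=300, hidden_dim=512)
```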

[00...



Abstract

The invention discloses a multi-attention mechanism video description method based on spatio-temporal and channel attention. Video features are first extracted from the video by a CNN network. A multi-attention network then computes, from the video features and the decoder output at the previous moment, attention weights over the video features in the temporal domain, the spatial domain, and the channels. The three sets of weights are applied back to the video features to obtain fused features, yielding more effective video features. Finally, the fused features are decoded to produce a description that is more consistent with the video content.
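The three-way fusion described in the abstract can be sketched in NumPy as follows. This is a minimal illustration of the idea (per-axis attention weights applied back to a (time, space, channel) feature tensor), not the patent's actual network: the scoring functions and projections `Wt`, `Ws`, `Wc`, `Uh` are random placeholders for learned parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_attention_fuse(feats, h_prev, Wt, Ws, Wc, Uh):
    """Compute separate attention weights over the temporal (T),
    spatial (S) and channel (C) axes of a video feature tensor,
    conditioned on the previous decoder state, and apply them back
    to the features to obtain a fused feature vector.

    feats:  (T, S, C) video feature tensor
    h_prev: (H,) previous decoder hidden state
    Wt, Ws, Wc: per-axis score scales; Uh: state projection (placeholders)
    """
    h_bias = Uh @ h_prev                                   # scalar conditioning term
    a_t = softmax(feats.mean(axis=(1, 2)) * Wt + h_bias)   # temporal weights (T,)
    a_s = softmax(feats.mean(axis=(0, 2)) * Ws + h_bias)   # spatial weights (S,)
    a_c = softmax(feats.mean(axis=(0, 1)) * Wc + h_bias)   # channel weights (C,)
    # re-weight the tensor along each axis, then pool to the fused feature
    weighted = feats * a_t[:, None, None] * a_s[None, :, None] * a_c[None, None, :]
    return weighted.sum(axis=(0, 1))                       # fused feature (C,)

rng = np.random.default_rng(1)
fused = multi_attention_fuse(
    rng.standard_normal((4, 7, 16)),   # 4 frames, 7 spatial positions, 16 channels
    rng.standard_normal(8),            # previous decoder state
    0.5, 0.5, 0.5, rng.standard_normal(8))
```

Conditioning all three weight sets on `h_prev` is what ties the fused feature to the partially generated sentence, which is the consistency property the abstract emphasizes.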

Description

Technical Field
[0001] The invention belongs to the technical field of optical communication, and more specifically relates to a multi-attention mechanism video description method based on time, space, and channels.
Background
[0002] Video description is a research topic spanning the two fields of computer vision and natural language processing, and has received great attention in recent years. Venugopalan published a video description model based on the "encoding-decoding" framework in 2014. The encoding model in that paper first uses a CNN to extract features from single video frames, and then adopts two encoding schemes: mean pooling and temporal encoding. Although the model has been successfully applied to video description, some problems remain in the video description model:
[0003] The first problem is that video features are not effectively utilized. In the paper, the video features are only used in the first decoding step, and the video features are not used ...

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC(8): G06K9/00, G06N3/04
CPC: G06V20/46, G06N3/045
Inventors: 徐杰, 李林科, 田野, 王菡苑
Owner: UNIV OF ELECTRONICS SCI & TECH OF CHINA