
Video abstraction method based on attention expansion coding and decoding network

A video summarization technology based on attention, applied in the field of video summarization. It addresses the lack of a clear solution for handling outliers during parameter updates, the failure to make full use of video semantic information, and the failure to comprehensively consider global constraints on the generated summaries. Its stated effects include saved time and resources, enhanced robustness, and an improved search experience.

Pending Publication Date: 2019-08-09
TIANJIN UNIV
View PDF · 7 Cites · 16 Cited by

AI Technical Summary

Problems solved by technology

[0006] The above methods focus only on the local correspondence between the generated summaries and the ground-truth annotations; they neither comprehensively consider global constraints on the generated summaries nor make full use of the video's semantic information. Moreover, they offer no clear solution for handling outliers during parameter updates, which also degrades the final summarization performance.
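One common way to damp the effect of outlier parameter updates, and the moving-average idea the patent's abstract later mentions, can be sketched as follows. This is a minimal illustration only; the function name and decay value are hypothetical and not taken from the patent:

```python
def ema_update(shadow, params, decay=0.9):
    # Exponential moving average of model parameters: each step blends the
    # running average with the new parameters, so a single outlier update
    # shifts the averaged parameters by only (1 - decay) of its magnitude.
    return [decay * s + (1.0 - decay) * p for s, p in zip(shadow, params)]

# A single outlier value (10.0) moves the smoothed parameter only slightly.
smoothed = ema_update([0.0, 1.0], [10.0, 1.0], decay=0.9)
```

Maintaining such a smoothed copy of the parameters alongside the raw ones is one standard way to keep noisy updates from dominating the final model.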




Embodiment Construction

[0026] The video summarization method based on the attention-expanded encoder-decoder network of the present invention is described in detail below with reference to the embodiments and the accompanying drawings.

[0027] To strengthen the generated summary's ability to retain the important and relevant information of the original video, the present invention draws on the idea of retrospective encoding and introduces a global semantic discrimination loss, using the semantic information of the original video to guide summary generation and to constrain the generation process as a whole. Label information is not required in this process, which alleviates the model's dependence on annotated data. Unlike retrospective encoding, however, the present invention starts from obtaining the video frame width information and constructing the maximum semantic information constraint, integrates video frame context information, and does not introduce the mismatch loss b...
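A global semantic discrimination loss of the kind described above can be sketched as a distance between two semantic embeddings: one for the original video and one for the generated summary (e.g. the final state of a retrospective encoder run over the summary). The sketch below is a minimal NumPy illustration under that assumption; the function names and the use of cosine similarity are illustrative, not the patent's exact formulation:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two semantic embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def global_semantic_loss(video_emb, summary_emb):
    # Global constraint: the summary's semantic embedding should stay close
    # to the original video's semantic embedding. Only the two embeddings
    # are needed, so no frame-level labels are required.
    return 1.0 - cosine_similarity(video_emb, summary_emb)

v = np.array([1.0, 0.0, 1.0])
loss_same = global_semantic_loss(v, v)                           # near 0
loss_diff = global_semantic_loss(v, np.array([0.0, 1.0, 0.0]))   # near 1
```

Because the loss compares whole-sequence semantics rather than per-frame labels, it acts as exactly the kind of global, label-free constraint the paragraph describes.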



Abstract

A video summarization method based on an attention-expanded encoder-decoder network comprises: treating video summarization as a sequence-to-sequence learning process; using the temporal correlation between video frames to obtain a video frame feature sequence from an original video in the SumMe or TVSum dataset through a pre-trained network; taking the video frame feature sequence as the input of the encoder network in the attention-expanded encoder-decoder network to obtain a semantic information sequence of the video frames, and then obtaining a score for each video frame through a multiplicative attention decoding network, the scores of all video frames forming a summary sequence; obtaining a semantic information sequence of the summary sequence through a retrospective encoder, constructing a global semantic discrimination loss, and introducing a moving average model to learn the semantic correlation between the summary sequence and the video frame feature sequence, yielding a new summary sequence that retains the important information of the original video; and finally selecting the set final summary from the new summary sequence. The robustness of the model is enhanced.
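The scoring step in the pipeline above, where a multiplicative attention decoder turns encoder states into a per-frame importance score, can be sketched as follows. This is a simplified NumPy illustration, assuming standard multiplicative (bilinear) attention and a sigmoid output head; the dimensions, weight names, and score head are hypothetical, not the patent's exact architecture:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

def multiplicative_attention(dec_state, enc_states, W):
    # Multiplicative attention: energy_i = dec_state^T W enc_states[i].
    energies = enc_states @ (W @ dec_state)   # shape (T,)
    weights = softmax(energies)               # attention over the T frames
    context = weights @ enc_states            # shape (d,): weighted semantics
    return context, weights

def frame_score(dec_state, context, w_out, b_out):
    # Sigmoid importance score in (0, 1) for the current frame, computed
    # from the decoder state concatenated with the attention context.
    z = np.concatenate([dec_state, context])
    return 1.0 / (1.0 + np.exp(-(w_out @ z + b_out)))

# Toy example: T = 5 encoder states of dimension d = 4.
rng = np.random.default_rng(0)
T, d = 5, 4
enc = rng.standard_normal((T, d))
s = rng.standard_normal(d)
W = rng.standard_normal((d, d))
ctx, attn = multiplicative_attention(s, enc, W)
score = frame_score(s, ctx, rng.standard_normal(2 * d), 0.0)
```

Running this per decoding step over all frames would produce the score sequence that the abstract calls the summary sequence.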

Description

Technical field

[0001] The invention relates to video summarization. In particular, it relates to a video summarization method based on an attention-expanded encoder-decoder network, for use in video processing and indexing.

Background technique

[0002] With the rapid development of information technology, video data has grown explosively, and the large amount of redundant and repeated information it contains makes it harder for users to quickly obtain the information they need. Video summarization technology arose in this context. Its goal is to generate a compact and comprehensive summary that provides users with the maximum information of the target video in the shortest time, satisfying the need to browse videos quickly and accurately and improving people's access to information.

[0003] Research on video summarization is generally divided into two categories: supervised learning and unsupervised learnin...


Application Information

IPC(8): G06F16/738, G06F16/783, H04N21/8549
CPC: G06F16/739, G06F16/783, H04N21/8549
Inventors: 冀中, 焦放, 庞彦伟
Owner: TIANJIN UNIV