Video abstract method based on supervised video segmentation

A technology of video summarization and video segmentation, applied in the field of video summarization for multimedia social networking. It solves the problems that existing methods do not consider video structure, produce summaries that are highly similar to one another, and cannot display complete actions, with the effect of improving the accuracy, interestingness, and efficiency of the resulting summaries.

Active Publication Date: 2018-04-06
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

This problem is generally extremely challenging and has been a recent research topic in video processing
[0005] There are two shortcomings of this algorithm. One is that the structure of the video, that is, the temporal continuity information between frames, is not considered when the video is decomposed into frames for processing, so the extracted summary can hardly describe the semantic information of an unedited video.
Therefore, the summary results often lack diversity, and the extracted summaries are highly similar to one another.
[0007] The video segmentation based on edge detection in reference [3] often has the disadvantage that a visually coherent action is split at a detected shot boundary, so the complete action cannot be displayed.

Method used



Examples


Embodiment 1

[0033] To solve the above problems, a method is needed that can comprehensively capture the structural information and similarity information of the training-set videos, so as to improve the accuracy of video segmentation and summarization as well as the interestingness of the summaries.

[0034] Studies have shown that similar videos have similar structures. By capturing the structured information of the training videos, this structure can be transferred to a test video, so that the segmentation and summarization of the test video follow the same structured information. Embodiments of the present invention propose a video summarization method based on supervised video segmentation; see Figure 1 and the description below:

[0035] 101: Obtain the kernel matrix of the test video from the similarity matrix and the kernel matrix of the training video, and use this kernel matrix as the regularized Laplacian matrix for time-domain subspace clustering (a sketch of this kernel transfer is given after these steps);

[0036] 102: Introduce the time-domain Laplace regulariz...
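
The following is a minimal sketch of the kernel transfer in step 101. The exact algebraic form is not reproduced above; it is assumed here that the test-video kernel is obtained by projecting the training kernel L_k through the test/train similarity matrix S_k (i.e. S_k · L_k · S_kᵀ), and the function names are illustrative.

```python
import numpy as np

def transfer_kernel(S_k: np.ndarray, L_train: np.ndarray) -> np.ndarray:
    """Assumed kernel transfer: project the training-video kernel (N1 x N1)
    through the test/train frame-similarity matrix S_k (N2 x N1) to obtain
    an N2 x N2 kernel for the test video."""
    return S_k @ L_train @ S_k.T

def regularized_laplacian(K: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Build the graph Laplacian D - K from the transferred kernel, used as
    the regularizer for time-domain subspace clustering."""
    K = 0.5 * (K + K.T)               # enforce symmetry
    D = np.diag(K.sum(axis=1) + eps)  # degree matrix
    return D - K
```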

Embodiment 2

[0047] The scheme of Embodiment 1 is described in further detail below in conjunction with specific calculation formulas and examples; see the following description for details:

[0048] 201: For a training video with N1 frames and a test video with N2 frames, extract a 512-dimensional color histogram feature from each frame, and construct an N2×N1 similarity matrix S_k;

[0049] Here, the elements of the similarity matrix S_k are computed from v_i and v_k, where v_i and v_k are the color histogram features of the test video and the training video, respectively; σ is a positive adjustable parameter; i is the index of the i-th frame of the test video; and k is the index of the k-th frame of the training video.
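
Since the exact expression for the similarity is not reproduced above, the following is a minimal sketch of step 201 that assumes a Gaussian (RBF) form with bandwidth σ; the 8×8×8 RGB binning for the 512-dimensional histogram and the function names are illustrative choices.

```python
import numpy as np

def color_histogram(frame_rgb: np.ndarray, bins_per_channel: int = 8) -> np.ndarray:
    """512-dimensional color histogram (8 x 8 x 8 RGB bins), L1-normalized."""
    hist, _ = np.histogramdd(
        frame_rgb.reshape(-1, 3).astype(float),
        bins=(bins_per_channel,) * 3,
        range=((0, 256),) * 3,
    )
    hist = hist.ravel()
    return hist / max(hist.sum(), 1.0)

def similarity_matrix(test_feats: np.ndarray, train_feats: np.ndarray,
                      sigma: float = 1.0) -> np.ndarray:
    """N2 x N1 similarity matrix S_k between test-video features v_i and
    training-video features v_k; a Gaussian kernel exp(-||v_i - v_k||^2 / sigma^2)
    is assumed here, with sigma the positive adjustable parameter."""
    d2 = ((test_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (sigma ** 2))
```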

[0050] 202: Obtain the kernel matrix L_k of the training video; L_k is obtained by diagonalizing the frame-score matrix given by the users' annotations;

[0051] gt_score is the users' score for each frame of the video; for example, for a video with 950 frames, gt_score is a 950×1 column matrix, which is the information of th...
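
A minimal sketch of step 202 as stated above, using the 950-frame example from paragraph [0051]; the random scores stand in for real user annotations.

```python
import numpy as np

# gt_score: the users' importance score for each frame of the training video,
# e.g. a 950 x 1 column matrix for a 950-frame video (placeholder values here).
gt_score = np.random.rand(950, 1)

# Kernel matrix L_k of the training video, obtained by diagonalizing gt_score:
# a 950 x 950 diagonal matrix whose diagonal holds the per-frame scores.
L_k = np.diag(gt_score.ravel())
```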

Embodiment 3

[0072] In conjunction with the specific calculation formulas and the appended Figures 2 and 3, the feasibility of the schemes in Embodiments 1 and 2 is verified; see the following description for details:

[0073] The database used in this experiment is SumMe. The SumMe database consists of 25 videos with an average length of 2 minutes and 40 seconds. Each video is edited and summarized by 15 to 18 people, and the average length of human summarization (based on footage) is 13.1% of the original video.

[0074] In all experiments, the automatic summary produced by this method (A) is evaluated against the human-annotated summaries (B) by computing the F-score (F), precision (P), and recall (R), as follows:

[0075] P = |A∩B| / |A|

[0076] R = |A∩B| / |B|

[0077] F = 2·P·R / (P + R)

where |A∩B| is the number of frames shared by the automatic summary A and a human summary B, and |A| and |B| are their respective lengths in frames.
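
A minimal sketch of this frame-level evaluation, assuming the summaries are represented as boolean per-frame selection masks (the representation is not specified above).

```python
import numpy as np

def precision_recall_f(auto_mask: np.ndarray, human_mask: np.ndarray):
    """Precision, recall and F-score between an automatic summary A and a
    human summary B, both given as boolean per-frame selection masks."""
    overlap = np.logical_and(auto_mask, human_mask).sum()
    p = overlap / max(auto_mask.sum(), 1)
    r = overlap / max(human_mask.sum(), 1)
    f = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    return p, r, f
```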

[0078] Table 1 below shows the F-scores of this method on the SumMe videos.

[0079] Table 1


[0082] Comparing the video summarization results obtained by this method with the...



Abstract

The invention discloses a video abstract method based on supervised video segmentation. The method comprises the following steps: obtaining the kernel matrix of a test video from a similarity matrix and the kernel matrix of a training video, and taking this kernel matrix as the regularized Laplacian matrix for time-domain subspace clustering; introducing a time-domain Laplacian regularization term to obtain an objective function, solving the objective function with the alternating direction method of multipliers (ADMM) to obtain the segmented video frames, and computing the score of each video frame; selecting suitable segments as the video abstract with a knapsack method; and comparing the obtained video abstract with a manually annotated video abstract and adjusting the parameters over repeated tests so that the generated abstract comes closer to the manually annotated one. The method improves the efficiency and accuracy of video abstraction.
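
The "knapsack method" mentioned above selects segments under a summary-length budget. The following is a minimal 0/1-knapsack sketch; the choice of segment score (e.g. the mean frame score) and the budget (e.g. 15% of the video length) are illustrative assumptions, not values taken from the text.

```python
def knapsack_select(lengths, scores, budget):
    """0/1 knapsack: choose segments that maximize the total score subject to
    a summary-length budget (all lengths in frames). Returns chosen indices."""
    n = len(lengths)
    # dp[i][j] = best score using the first i segments with total length <= j
    dp = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, v = lengths[i - 1], scores[i - 1]
        for j in range(budget + 1):
            dp[i][j] = dp[i - 1][j]
            if w <= j:
                dp[i][j] = max(dp[i][j], dp[i - 1][j - w] + v)
    # backtrack to recover which segments were selected
    chosen, j = [], budget
    for i in range(n, 0, -1):
        if dp[i][j] != dp[i - 1][j]:
            chosen.append(i - 1)
            j -= lengths[i - 1]
    return sorted(chosen)

# Example: three segments of 40/90/60 frames with scores 0.8/0.5/0.9 and a
# 100-frame budget; the best choice is segments 0 and 2.
print(knapsack_select([40, 90, 60], [0.8, 0.5, 0.9], 100))  # -> [0, 2]
```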

Description

Technical field

[0001] The invention relates to the field of video summarization for multimedia social networking, and in particular to a video summarization method based on supervised video segmentation.

Background technique

[0002] Most YouTube videos are long and unedited, and their semantic content cannot be grasped quickly. Users often want to browse a video to quickly get hints about its semantic content. With the explosive growth of video data, there is an urgent need for automatic video summarization algorithms that address this problem by providing short video summaries of longer videos. An ideal video summary includes all important video segments while keeping the overall length short. This problem is generally extremely challenging and has been a recent research topic in video processing. By taking long videos as input and generating short videos (or keyframe sequences) as output, video summarization has great potential to condense raw videos and make them more browsable and sear...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06F17/30
CPC: G06F16/739; G06F18/232
Inventor: 张静 (Zhang Jing), 石玥 (Shi Yue), 苏育挺 (Su Yuting), 井佩光 (Jing Peiguang)
Owner: TIANJIN UNIV