
Video abstract key frame extraction method based on abstract space feature learning

A video summarization key frame extraction technology, applied in the fields of instruments, character and pattern recognition, electrical components, etc., which addresses problems such as the poor quality of extracted key frames.

Active Publication Date: 2015-11-04
NORTHWESTERN POLYTECHNICAL UNIV
Cites: 3 · Cited by: 28

AI Technical Summary

Problems solved by technology

[0003] In order to overcome the shortcoming that the key frames extracted by existing video summary key frame extraction methods are of poor quality, the present invention provides a video summary key frame extraction method based on abstract space feature learning.




Embodiment Construction

[0039] Referring to Figures 1-2, the specific steps of the video summary key frame extraction method based on abstract space feature learning of the present invention are as follows:

[0040] Step 1, video data preprocessing.

[0041] In order to reduce the redundancy of the video data, the video frames are first uniformly sampled; specifically, one video frame is taken every second for analysis. A color histogram in HSV space is then established for each selected video frame: the H channel is divided into 16 equal bins, and the S and V channels into 4 equal bins each; the statistics of the three channels are normalized to obtain the feature vector of each frame. Finally, the feature matrix X = {x1, x2, ..., xn} of the video is obtained and used as the input data, where n is the number of video frames after uniform sampling and xn is the feature vector of the nth frame.
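The preprocessing step above can be sketched as follows. This is a minimal numpy-only illustration, not the patent's implementation: it assumes frames are already converted to HSV with H in [0, 360) and S, V in [0, 1] (the patent text does not fix the value ranges), and it normalizes each channel histogram to sum to 1.

```python
import numpy as np

def frame_feature(hsv_frame):
    """24-dim HSV color histogram: 16 H bins + 4 S bins + 4 V bins.

    hsv_frame: (rows, cols, 3) array, H in [0, 360), S and V in [0, 1]
    (value ranges are an assumption). Each channel histogram is
    normalized by the pixel count so it sums to 1.
    """
    h = hsv_frame[..., 0].ravel()
    s = hsv_frame[..., 1].ravel()
    v = hsv_frame[..., 2].ravel()
    n = h.size
    h_hist = np.histogram(h, bins=16, range=(0.0, 360.0))[0] / n
    s_hist = np.histogram(s, bins=4, range=(0.0, 1.0))[0] / n
    v_hist = np.histogram(v, bins=4, range=(0.0, 1.0))[0] / n
    return np.concatenate([h_hist, s_hist, v_hist])

def video_feature_matrix(hsv_frames):
    """Stack per-frame features into X = {x1, ..., xn}, one row per frame."""
    return np.stack([frame_feature(f) for f in hsv_frames])
```

In practice the one-frame-per-second sampling would be done by the video decoder before these functions are called.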

[0042] Step 2, the video data is mapped to a h...



Abstract

The invention discloses a video abstract key frame extraction method based on abstract space feature learning, so as to solve the technical problem that the key frames extracted by existing video abstract key frame extraction methods are of poor quality. According to the technical scheme, video frames are uniformly sampled, color histogram features are extracted from each sampled frame, and the feature matrix X of the video frames serves as the input data; a Lipschitz smooth real function maps the feature matrix X to the abstract space S, and a weight matrix W is used to carry out representative frame extraction; the Hamming distance between two image fingerprints is then calculated, and if the Hamming distance H between the fingerprints of two representative frames is smaller than a threshold, the two video frames are regarded as similar frames. A key frame collection meeting the requirements of representativeness and difference is thus obtained, and a video abstract is produced by ranking the collection in time order. Because the representativeness and difference of the key frames are measured, the video abstract can display the video content without information redundancy, and the quality of the video abstract key frames is improved.
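The fingerprint-comparison step of the abstract can be illustrated with a short sketch. The exact fingerprint used by the patent is not specified in this excerpt, so an average hash is used here as a stand-in; the threshold value is likewise an assumed placeholder.

```python
import numpy as np

def ahash(gray, hash_size=8):
    """Average-hash fingerprint (a stand-in; the patent's fingerprint
    is not specified here). Returns a flat boolean bit vector."""
    rows, cols = gray.shape
    # crop so the image tiles evenly, then block-average down to hash_size^2
    small = gray[:rows - rows % hash_size, :cols - cols % hash_size]
    bh, bw = small.shape[0] // hash_size, small.shape[1] // hash_size
    small = small.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    # each bit: is the block brighter than the image's mean?
    return (small > small.mean()).ravel()

def hamming(f1, f2):
    """Hamming distance: number of differing fingerprint bits."""
    return int(np.count_nonzero(f1 != f2))

def is_similar(f1, f2, threshold=5):
    """Frames whose fingerprints differ in fewer than `threshold` bits
    (an assumed value) are treated as similar frames."""
    return hamming(f1, f2) < threshold
```

Among a set of mutually similar representative frames, only one would be kept in the final key frame collection.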

Description

Technical field

[0001] The invention relates to a video abstract key frame extraction method, in particular to a video abstract key frame extraction method based on abstract space feature learning.

Background technique

[0002] The literature "S. Avila, A. Lopes, A. Luz Jr., and A. Araujo. VSUMM: A Mechanism Designed to Produce Static Video Summaries and a Novel Evaluation Method. Pattern Recognition Letters, 32(1):56–68, 2011" discloses a key frame extraction algorithm based on video frame clustering. This algorithm takes the color histogram features of the video frames as input data, measures the similarity of video frames by the Euclidean distance, uses the k-means clustering method to assign the video frames to different clusters, and finally selects each cluster center as a key frame. The number of cluster centers is determined by the number of shots in the video, and the shot boundaries are determined by the peaks of the Euclidean distance...
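The clustering baseline described in [0002] can be sketched as follows. This is a minimal illustration, not the VSUMM implementation: it takes the number of clusters k as given (VSUMM derives it from shot detection, which is omitted here) and uses a plain k-means loop with Euclidean distances.

```python
import numpy as np

def kmeans_keyframes(X, k, iters=50, seed=0):
    """Cluster the frame feature matrix X (one row per frame) with
    k-means and return the index of the frame nearest each center.
    k, the number of shots, is assumed to be known here."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign every frame to its nearest center (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned frames
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    # a key frame must be an actual frame, so pick the closest one per center
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return sorted(set(int(i) for i in dists.argmin(axis=0)))
```

The shortcoming the invention targets is visible in this sketch: the cluster center need not be a representative or distinctive frame, which is why the extracted key frames can be of poor quality.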


Application Information

IPC(8): H04N21/8549; G06K9/62
CPC: H04N21/8549; G06F18/23
Inventor: 李学龙 (Li Xuelong), 卢孝强 (Lu Xiaoqiang), 赵斌 (Zhao Bin)
Owner NORTHWESTERN POLYTECHNICAL UNIV