
Video fingerprint extraction method based on slowly-changing visual features

A video fingerprinting technology based on visual features, applied in special data processing applications, instruments, electrical digital data processing, etc. It addresses the difficulty of fully describing complex image and video information with hand-designed models, avoiding that limitation while achieving low computational complexity and easy implementation.

Inactive Publication Date: 2018-05-11
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

Since the information expressed by image and video data is very complex, it is difficult for hand-designed models to fully describe the information, especially some abstract features.

Method used



Examples


Embodiment 1

[0035] In order to realize a robust and efficient learning method for video fingerprints, an embodiment of the present invention proposes a method for extracting video fingerprints based on slowly changing visual features; see Figures 1 and 2 and the description below:

[0036] 101: Generate a random distorted image for each image in the training set, use the original image and the distorted image to train spatial features, and extract network parameters;

[0037] Among them, this step is specifically:

[0038] 1) Add random distortion to each image to obtain the corresponding distorted image;

[0039] The type of distortion may be set according to actual needs, and is not limited in this embodiment of the present invention.

[0040] 2) Normalize all images (that is, both the original images and the distorted images) to n×n, with a mean of 0 and a variance of 1; treat different images as different classes and each image together with its distorted version as the same class, by...
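The normalization in step 2 can be sketched as follows. This is a minimal NumPy illustration, not the patent's actual implementation: the function name and the nearest-neighbor resizing are assumptions for demonstration, and a real pipeline would use a proper image library for resizing.

```python
import numpy as np

def normalize_image(img, n=32):
    """Resize an image to n x n (naive nearest-neighbor sampling for
    illustration), then standardize it to zero mean and unit variance."""
    ys = np.linspace(0, img.shape[0] - 1, n).astype(int)
    xs = np.linspace(0, img.shape[1] - 1, n).astype(int)
    resized = img[np.ix_(ys, xs)].astype(np.float64)
    # Zero mean, unit variance; epsilon guards against flat images.
    return (resized - resized.mean()) / (resized.std() + 1e-8)

img = np.random.default_rng(0).uniform(0, 255, size=(512, 512))
out = normalize_image(img, n=32)
```

After this step, every original image and every distorted version is a comparable fixed-size, zero-mean, unit-variance input for the spatial feature network.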

Embodiment 2

[0059] Taking an actual training process as an example, the scheme of Embodiment 1 is described in detail together with the specific calculation formulas, explaining the video fingerprint learning method provided by this embodiment of the present invention. See the following description for details:

[0060] 201: image preprocessing;

[0061] 15,000 images were randomly selected from ImageNet as training data for the spatial feature extraction network; the images were normalized to a standard size of 512×512 and mean-filtered.
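The mean filtering mentioned above can be illustrated with a simple box filter. This is a minimal NumPy sketch under the assumption that "filtered by mean value" denotes an ordinary k×k neighborhood average; the function name is hypothetical.

```python
import numpy as np

def mean_filter(img, k=3):
    """Average each pixel over a k x k neighborhood (box/mean filter),
    using edge padding so the output keeps the input shape."""
    p = k // 2
    padded = np.pad(img.astype(np.float64), p, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

smoothed = mean_filter(np.ones((8, 8)), k=3)
```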

[0062] 202: Generate a random distorted version of the image;

[0063] Apply a random distortion or transformation to each image. The distortion types include: JPEG lossy compression, Gaussian noise, rotation, median filtering, histogram equalization, gamma correction, speckle noise, and loop filtering. This yields 30,000 images in total (originals plus distorted versions).
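The random-distortion step can be sketched as below. This is an illustrative NumPy fragment covering only a subset of the listed distortion types (additive Gaussian noise, gamma correction, and a 90-degree rotation as a stand-in for general rotation); the function name and parameter ranges are assumptions, not the patent's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def distort(img):
    """Apply one randomly chosen content-preserving distortion
    (subset of the types listed in the embodiment, for illustration)."""
    choice = rng.integers(3)
    if choice == 0:
        # Additive Gaussian noise, clipped back to the valid pixel range.
        return np.clip(img + rng.normal(0, 10, img.shape), 0, 255)
    if choice == 1:
        # Gamma correction with a randomly drawn exponent.
        gamma = rng.uniform(0.7, 1.5)
        return 255.0 * (img / 255.0) ** gamma
    # 90-degree rotation as a simple stand-in for general rotation.
    return np.rot90(img)
```

Applying one such transform to each of the 15,000 originals doubles the set to 30,000 images, with each original and its distorted copy labeled as the same class.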

[0064] 203: training spatial feature extraction ne...

Embodiment 3

[0079] The feasibility of the schemes in Embodiments 1 and 2 is verified below using specific experimental data; see the following description for details:

[0080] Following Embodiment 2, 600 video sequences downloaded from YouTube and another 201 video sequences from TRECVID, 801 videos in total, are selected as test videos; the videos are pairwise distinct and have no overlap with the training data. Nine common content-preserving distortions are applied to each video, each at several different strengths, as shown in the following table:

[0081] Table 1 Distortion types and parameter settings

[0082]

[0083] Each original video undergoes these distortions to generate 17 copy versions, giving a test library of 14,418 original videos and copies in total. Each video is normalized by the method of step 204 to obtain a 32×32×20 volume, which is divided into two video sequences of 32×32×10 and obtained by the m...
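The spatio-temporal normalization described above can be sketched as follows. This is a minimal NumPy illustration under the assumption that step 204 samples frames and pixels to a fixed 32×32×20 volume; the function name and nearest-neighbor sampling are illustrative only.

```python
import numpy as np

def normalize_video(frames, size=32, length=20):
    """Sample a video (T x H x W array) to `length` frames of
    `size` x `size`, then split the volume into two equal halves."""
    t_idx = np.linspace(0, frames.shape[0] - 1, length).astype(int)
    ys = np.linspace(0, frames.shape[1] - 1, size).astype(int)
    xs = np.linspace(0, frames.shape[2] - 1, size).astype(int)
    cube = frames[np.ix_(t_idx, ys, xs)]            # length x size x size
    return cube[:length // 2], cube[length // 2:]   # two half-length clips
```

With the defaults, an arbitrary-length video becomes a 20×32×32 volume split into two 10-frame sequences, matching the 32×32×20 and 32×32×10 dimensions stated above.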



Abstract

The invention discloses a video fingerprint extraction method based on slowly-changing visual features. The method comprises: for each image in a training set, generating a randomly distorted version of the image; training spatial features using the original and distorted images and extracting the network parameters; performing spatial and temporal normalization preprocessing on the training-set video data so that the data have a fixed dimension and a fixed frame number; extracting feature sequences with the spatial feature extraction network and using them as training data to train an LSTM network; and cascading the trained spatial feature extraction network with the LSTM network to extract the video fingerprint. The method simulates principles of human visual perception to extract video fingerprints, offers high robustness and high efficiency, and can be applied to fields such as video copy detection and video retrieval.
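As a toy illustration of the final fingerprinting idea, the feature sequence produced by the cascaded networks can be collapsed into a compact binary code. The sketch below is illustrative only and is not the patent's actual scheme: it shows one common approach (mean-pooling over time, then thresholding at the median), with a hypothetical function name.

```python
import numpy as np

def binarize_fingerprint(features):
    """Illustrative only: collapse a feature sequence (T x D array)
    into a D-bit binary fingerprint by mean-pooling over time and
    thresholding each dimension at the pooled median."""
    pooled = features.mean(axis=0)                    # D-dim temporal mean
    return (pooled > np.median(pooled)).astype(np.uint8)

feats = np.arange(40, dtype=float).reshape(5, 8)      # toy 5-frame, 8-dim sequence
fp = binarize_fingerprint(feats)
```

Such short binary summaries are what make fingerprint comparison over massive video libraries efficient, which is the use case the abstract targets.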

Description

Technical Field

[0001] The invention relates to the field of video copy detection, in particular to a method for extracting video fingerprints based on slowly changing visual features.

Background Technique

[0002] With the development of video sharing websites and the mobile Internet, video data on the network has increased dramatically, bringing problems such as copyright infringement and illegal content dissemination. Given the huge amount of data, it is impossible to rely on manpower to search for illegally copied videos. To solve this problem, several video copy detection methods have been proposed in recent years. Video copy detection searches massive data for copy versions of a known source video. The video fingerprinting algorithm is the key technology of copy detection: it describes the main content of the video as a short content summary, similar to a human fingerprint. Video copy detection technology can id...

Claims


Application Information

IPC(8): G06K9/62; G06F17/30
CPC: G06F16/783; G06F18/21; G06F18/214
Inventors: 李岳楠 (Li Yuenan), 汪冬冬 (Wang Dongdong)
Owner TIANJIN UNIV