
Deep feature fusion video copy detection method based on attention mechanism

A deep feature fusion video copy detection technology, applied in neural learning methods, computer components, and digital data information retrieval. It is intended to solve the problems of low processing efficiency and low accuracy in traditional algorithms, and the difficulty of achieving satisfactory results when processing complexly edited videos with global feature extraction methods.

Pending Publication Date: 2020-06-05
Applicant: 深圳市网联安瑞网络科技有限公司

AI Technical Summary

Problems solved by technology

[0003] Current solutions use either traditional image processing or global feature extraction. Traditional algorithms suffer from low processing efficiency and low accuracy. Global feature extraction methods handle ordinary edits well, but for videos edited with various complex transformations their results are difficult to meet expectations. Both approaches therefore have shortcomings for multimedia videos on the Internet.


Image

Figure: Deep feature fusion video copy detection method based on attention mechanism


Embodiment Construction

[0028] To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it; that is, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the invention, as generally described and illustrated in the figures herein, may be arranged and designed in a variety of different configurations. Accordingly, the following detailed description of the embodiments of the invention provided in the accompanying drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. Based on the embodiments of the present ...



Abstract

The invention discloses a deep feature fusion video copy detection method based on an attention mechanism, comprising the steps of: (1) extracting frame images from video data and constructing an image pyramid at different scales; (2) taking a deep convolutional neural network model as the basic network and adding an attention mechanism to an intermediate convolutional layer of the model; (3) inputting the frame images and the image pyramid into the attention-equipped deep convolutional neural network model and obtaining fused features through splicing and fusion; (4) training the deep convolutional neural network model with metric learning; and (5) using the trained model to identify the source video data through similarity calculation. By combining an attention mechanism with the fusion of global and local features, the method addresses both the low efficiency and low precision of traditional image processing methods and the inability of global features to adapt to various complex transformations.
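Steps (1)-(3) of the abstract can be sketched in NumPy. The patent does not specify its backbone or attention module, so everything here is a hypothetical stand-in: a squeeze-and-excitation-style channel attention gate with fixed random weights, a nearest-neighbour image pyramid, and a fake "backbone" that just replicates the grayscale level across channels. The sketch only illustrates the data flow: pyramid levels pass through an attention-weighted feature map, are globally pooled, and are spliced into one fused descriptor.

```python
import numpy as np

def channel_attention(feat, reduction=2):
    """Hypothetical squeeze-and-excitation-style channel attention.
    feat: (C, H, W) feature map; returns the map reweighted per channel."""
    c = feat.shape[0]
    squeezed = feat.mean(axis=(1, 2))             # global average pool -> (C,)
    rng = np.random.default_rng(0)                # fixed weights, illustration only
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ squeezed, 0)         # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gate in (0, 1)
    return feat * gate[:, None, None]

def image_pyramid(img, scales=(1.0, 0.5)):
    """Step (1): build an image pyramid by nearest-neighbour downscaling."""
    pyramid = []
    for s in scales:
        h = max(1, int(img.shape[0] * s))
        w = max(1, int(img.shape[1] * s))
        rows = (np.arange(h) / s).astype(int)
        cols = (np.arange(w) / s).astype(int)
        pyramid.append(img[np.ix_(rows, cols)])
    return pyramid

def fused_descriptor(img, n_channels=8):
    """Steps (2)-(3): run each pyramid level through a stand-in 'backbone'
    with channel attention, pool each to a vector, and concatenate."""
    parts = []
    for level in image_pyramid(img):
        # stand-in backbone: replicate the grayscale level across channels
        feat = np.stack([level * (k + 1) / n_channels for k in range(n_channels)])
        feat = channel_attention(feat)
        parts.append(feat.mean(axis=(1, 2)))      # global pooling per level
    return np.concatenate(parts)                  # splice/fuse into one vector
```

The fused descriptor length is `n_channels * len(scales)`; in a real system each pyramid level would instead contribute the pooled activations of an attention-equipped intermediate convolutional layer of a trained CNN.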

Description

technical field

[0001] The invention relates to the technical field of multimedia information processing, in particular to an attention mechanism-based deep feature fusion video copy detection method.

Background technique

[0002] In today's mobile Internet era, the complexity of multimedia video data, the emergence of various video editing software, and the wide range of video sources make it difficult to prevent the indiscriminate dissemination of tampered video data. Network supervision departments that want to supervise online multimedia video data effectively cannot rely solely on human review and user reports.

[0003] Current solutions use either traditional image processing or global feature extraction. Traditional algorithms suffer from low processing efficiency and low accuracy, while global feature extraction methods handle ordinary edits well but, for videos edited with various complex transformations, produce results that are difficult to meet expectations.
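Steps (4)-(5) of the abstract — metric-learning training and similarity-based source retrieval — can also be sketched. The patent does not state which metric-learning objective it uses, so the triplet margin loss below is an assumed, commonly used choice, and `find_source` is a hypothetical helper that ranks database descriptors by cosine similarity.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Step (4): triplet margin loss, an assumed metric-learning objective.
    Pulls anchor toward the positive (a copy) and pushes it from the negative."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def find_source(query, database):
    """Step (5): return (index, similarity) of the most similar database
    descriptor under cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    sims = [cos(query, d) for d in database]
    best = int(np.argmax(sims))
    return best, sims[best]
```

In use, descriptors of query frames would be matched against a database of source-video descriptors; a similarity above a chosen threshold flags the query as a copy of that source.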


Application Information

IPC(8): G06F16/732; G06K9/62; G06N3/04; G06N3/08
CPC: G06F16/7328; G06N3/08; G06N3/045; G06F18/22; G06F18/253
Inventors: 贾宇, 沈宜, 董文杰, 张家亮, 曹亮
Owner: 深圳市网联安瑞网络科技有限公司