
Method and system for detecting deep forged video based on time sequence inconsistency

A video detection and consistency technology applied in the field of deep fake video detection. It addresses the problems of a high error rate and a low detection rate for fake videos, and achieves the effect of reducing misjudgment and improving detection accuracy.

Active Publication Date: 2021-03-12
CHONGQING UNIV OF POSTS & TELECOMM
Cites: 12 · Cited by: 7

AI Technical Summary

Problems solved by technology

Because video forgery is carried out frame by frame, it introduces temporal inconsistencies in expressions, lighting, and other attributes between adjacent frames. Such timing inconsistencies are difficult to capture with intra-frame detection methods. At the same time, current detection methods aimed at timing inconsistency rely mainly on information from frames before the current frame and ignore future frames, so the detection rate for forged videos is low and the error rate is high.



Examples


Example 1

[0061] Figure 1 is a schematic flowchart of a detection method based on timing inconsistency according to an exemplary embodiment. As shown in Figure 1, the method includes the following steps:

[0062] Step S1: Divide the obtained forged-video data set into a training set, a validation set, and a test set according to a certain ratio, where the numbers of real videos and forged videos are equal. Label the videos: real videos are labeled 0 and forged videos are labeled 1. Use ffmpeg to extract a certain number of frames from each video according to its frame rate. Use the MTCNN face detector to detect the face region in each extracted frame, align the face region by facial landmarks, save the face images, and normalize them to 240*240 pixels.
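A minimal sketch of this preprocessing step is shown below. It assumes ffmpeg is available on the PATH and uses the facenet-pytorch MTCNN detector together with OpenCV; the sampling rate, paths, and function names are illustrative and are not specified by the patent.

```python
# Illustrative sketch of Step S1 (assumes ffmpeg on the PATH, the
# facenet-pytorch MTCNN detector, and OpenCV; names and paths are
# hypothetical, not taken from the patent text).
import subprocess
from pathlib import Path

import cv2
from facenet_pytorch import MTCNN

detector = MTCNN(keep_all=False)  # returns the most confident face box


def extract_frames(video_path: str, out_dir: str, fps: int = 5) -> None:
    """Sample frames from a video at a fixed rate with ffmpeg."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}",
         f"{out_dir}/frame_%04d.png"],
        check=True,
    )


def crop_face(frame_path: str, save_path: str) -> bool:
    """Detect the face region, crop it, and resize to 240x240 pixels.

    Landmark-based alignment is omitted here for brevity.
    """
    img = cv2.cvtColor(cv2.imread(frame_path), cv2.COLOR_BGR2RGB)
    boxes, _ = detector.detect(img)
    if boxes is None:
        return False
    x1, y1, x2, y2 = (int(v) for v in boxes[0])
    face = img[max(y1, 0):y2, max(x1, 0):x2]
    face = cv2.resize(face, (240, 240))
    cv2.imwrite(save_path, cv2.cvtColor(face, cv2.COLOR_RGB2BGR))
    return True
```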

[0063] Step S2: Input the video frames processed in S1 into the Xception network for feature-extraction training. Because of the Xception network's global pooling layer, some channel and spatial ...
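The following is a hedged sketch of such a frame-level feature extractor. It assumes the timm package provides an Xception backbone under the name "xception", and it uses a generic channel-plus-spatial attention block as a stand-in for the patent's attention mechanism module, whose exact structure is not reproduced in this excerpt.

```python
# Illustrative sketch of Step S2: Xception backbone plus a channel/spatial
# attention block. Assumes the timm package; the attention design is a
# generic stand-in, not the patent's exact module.
import timm
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight channels.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: re-weight each spatial location.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel(x)
        return x * self.spatial(x)


class FrameFeatureExtractor(nn.Module):
    def __init__(self, feature_dim: int = 512):
        super().__init__()
        # Xception without its classifier or global pooling layer.
        self.backbone = timm.create_model(
            "xception", pretrained=True, num_classes=0, global_pool="")
        channels = self.backbone.num_features  # 2048 for Xception
        self.attention = ChannelSpatialAttention(channels)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Linear(channels, feature_dim)

    def forward(self, frames):            # frames: (B, 3, 240, 240)
        feats = self.backbone(frames)      # (B, 2048, H', W')
        feats = self.attention(feats)
        feats = self.pool(feats).flatten(1)
        return self.proj(feats)            # (B, feature_dim)
```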

Example 2

[0090] Referring to Figure 5, a deep fake video detection system based on the consistency between video frames is shown. The system includes the following units: a data preprocessing module, a video frame feature extraction module, a video frame timing analysis module, and a fake video classification module.
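A minimal Python sketch of how these four modules could be composed is given below; the class and method names are hypothetical placeholders, not taken from the patent.

```python
# Illustrative wiring of the four modules named above; all class and method
# names are hypothetical placeholders, not defined by the patent.
class DeepfakeDetectionSystem:
    def __init__(self, preprocessor, feature_extractor, temporal_model, classifier):
        self.preprocessor = preprocessor              # data preprocessing module
        self.feature_extractor = feature_extractor    # video frame feature extraction module
        self.temporal_model = temporal_model          # video frame timing analysis module
        self.classifier = classifier                  # fake video classification module

    def detect(self, video_path):
        faces = self.preprocessor(video_path)               # aligned 240x240 face crops
        frame_feats = self.feature_extractor(faces)         # per-frame feature vectors
        sequence_feats = self.temporal_model(frame_feats)   # inter-frame consistency features
        return self.classifier(sequence_feats)              # 0 = real, 1 = fake
```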

[0091] The data preprocessing module processes the experimental data and mainly comprises three units: data-set division, frame extraction, and face extraction. The data set is divided into a training set, a validation set, and a test set according to a certain ratio; the videos are then split into frames; finally, faces are extracted and the resulting face images are normalized to a uniform pixel size. When extracting a face, the face detector first frames the face region, and the face is then extracted through facial-landmark alignment, which improves the face detection rate.

[0092] The video frame feature extracti...



Abstract

The invention relates to a deep forged video detection method and system based on time sequence inconsistency, and belongs to the field of video detection. The method comprises the following steps: S1, acquiring a video data set and preprocessing the data to obtain face images of the video frames; S2, inputting the video frames into a fine-tuned Xception network combined with a convolutional attention mechanism module for training, where the attention mechanism module network is used to extract frame-level features; S3, performing feature extraction on continuous frames of the video with the trained Xception network and inputting the features into a bidirectional long short-term memory network combined with a conditional random field model for training; S4, performing forgery detection on a video to be tested with the trained model. The method exploits the time sequence inconsistency that forgery technology introduces between video frames, and the detection of deeply forged video is improved to a certain extent by combining a bidirectional long short-term memory network with a conditional random field algorithm.
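A hedged sketch of the sequence model described in steps S3 and S4 follows, assuming per-frame feature vectors from the Xception extractor and the pytorch-crf package for the CRF layer; the layer sizes are illustrative.

```python
# Illustrative sketch of the BiLSTM + CRF sequence model (steps S3-S4).
# Assumes the pytorch-crf package (torchcrf); dimensions are illustrative.
import torch.nn as nn
from torchcrf import CRF


class TemporalConsistencyModel(nn.Module):
    def __init__(self, feature_dim: int = 512, hidden_dim: int = 256, num_tags: int = 2):
        super().__init__()
        # The bidirectional LSTM reads both past and future frames around
        # each position, unlike forward-only temporal models.
        self.bilstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.emission = nn.Linear(2 * hidden_dim, num_tags)
        # The CRF models dependencies between per-frame real/fake labels.
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, frame_feats, tags=None):
        # frame_feats: (B, T, feature_dim); tags: (B, T) with 0 = real, 1 = fake
        out, _ = self.bilstm(frame_feats)
        emissions = self.emission(out)
        if tags is not None:
            return -self.crf(emissions, tags)   # negative log-likelihood loss
        return self.crf.decode(emissions)       # best label sequence per video
```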

Description

Technical field [0001] The invention belongs to the field of video detection and relates to a deep fake video detection method and system based on timing inconsistency. Background technique [0002] With the development of society and the advancement of technology, more and more people share their lives by posting photos and videos on social software. However, as video forgery tools emerge endlessly (Adobe Premiere, Adobe Photoshop, Lightworks), people can forge videos more easily, and some lawbreakers profit by forging photos and videos. At the same time, with the rise of machine learning technology, the combination of deep learning and video forgery, through the training of codecs for face forgery, makes it even harder to tell forged videos from authentic ones. For example, with the face-swapping software ZAO, a person needs only a single photo to replace the face in a video with the face in the photo. These forgery techniques call into question the integ...


Application Information

IPC(8): G06K9/00; G06K9/46; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/049; G06N3/08; G06V40/172; G06V20/46; G06V10/44; G06N3/045; G06F18/241
Inventor: 陈龙; 陈函; 邱林坤
Owner CHONGQING UNIV OF POSTS & TELECOMM