
Deep fake face data identification method

A face-data technology applied to the identification of deep fake face data, addressing problems such as reduced detection accuracy on compressed images, the limited receptive field of convolution operations, and the failure to consider global pixel relationships, with the effect of improving robustness and generalization ability.

Inactive Publication Date: 2021-10-22
FUDAN UNIV
View PDF · Cites 1 · Cited by 13

AI Technical Summary

Problems solved by technology

In general, DeepFake identification faces three main challenges. First, with the development of DeepFake technology, high-quality forged videos often cannot be recognized by the naked eye, so multi-scale semantic information must be extracted from the image for effective identification. Second, when a forged image is compressed, the forgery clues are also covered up, so the detection accuracy of existing methods on compressed images drops significantly. Third, face forgery identification usually trains the identification model in a supervised manner to detect whether the face in an image has undergone a specific manipulation, which leads to overfitting: for images generated by the face manipulations seen during training, these methods achieve high detection accuracy, but when used to detect face manipulation methods absent from the training data, their discriminative performance drops significantly.
Although stacked convolutions achieve decent detection results, they are good at modeling local information but fail to consider global pixel relationships because of the limited receptive field of convolution operations.

Method used



Examples


Embodiment Construction

[0046] The present invention is further described below through specific embodiments.

[0047] Step 1: Input an image X ∈ R^(H×W×C), where H and W are the height and width of the image, respectively, and C is the number of channels (generally 3). Let f denote the backbone network (EfficientNet-b4 [8] in the implementation of the present invention), and let f_t denote the feature map extracted from layer t. Feature maps are first extracted from the shallow layers of f.
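As an illustration of Step 1 only, the sketch below extracts a shallow feature map from an EfficientNet-b4 backbone. It assumes the timm library is available; the stage index used for layer t and the input resolution are illustrative choices, not values taken from the patent.

```python
# Minimal sketch of Step 1, assuming the timm library; the chosen stage
# index and input resolution are illustrative, not specified by the patent.
import timm
import torch

# Backbone f with intermediate feature maps exposed; out_indices=(1,)
# selects a shallow stage to stand in for f_t.
backbone = timm.create_model("efficientnet_b4", pretrained=False,
                             features_only=True, out_indices=(1,))

x = torch.randn(1, 3, 320, 320)   # input image X in R^(H×W×C) with C = 3
f_t = backbone(x)[0]              # shallow feature map f_t
print(f_t.shape)                  # e.g. torch.Size([1, 32, 80, 80])
```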

[0048] Step 2: To capture multi-scale forgery patterns, the feature map is split into spatial image patches of different sizes, and self-attention is computed between the image patches of the different heads. Specifically, for the h-th head, N spatial image patches of shape r_h × r_h × C are extracted from f_s, where N = (H / r_h) × (W / r_h), and each patch is reshaped into a 1D vector.
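The patch splitting in Step 2 can be sketched in PyTorch as follows; the feature-map size and the per-head patch sizes r_h are illustrative values rather than those specified by the patent.

```python
# Sketch of the multi-scale patch splitting in Step 2 (PyTorch);
# the patch sizes r_h used for the different heads are illustrative.
import torch

def split_into_patches(f_s: torch.Tensor, r_h: int) -> torch.Tensor:
    """Split f_s of shape (B, C, H, W) into N = (H/r_h) * (W/r_h)
    non-overlapping r_h x r_h patches, each flattened to a 1D vector."""
    B, C, H, W = f_s.shape
    patches = f_s.unfold(2, r_h, r_h).unfold(3, r_h, r_h)     # (B, C, H/r_h, W/r_h, r_h, r_h)
    patches = patches.permute(0, 2, 3, 1, 4, 5).contiguous()  # group per-patch dims together
    N = (H // r_h) * (W // r_h)
    return patches.view(B, N, C * r_h * r_h)                  # one 1D vector per patch

f_s = torch.randn(1, 32, 80, 80)          # feature map from the shallow layers of f
for r_h in (20, 10, 5):                   # different patch sizes for different heads
    tokens = split_into_patches(f_s, r_h)
    print(r_h, tokens.shape)              # (1, 16, 12800), (1, 64, 3200), (1, 256, 800)
```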

[0049] Step 3: The flattened image patches are projected into query embeddings using a fully connected layer. Similar operations are followed ...
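Because the text of Step 3 is truncated, the following is only a hedged sketch of how the flattened patches could be projected into query (and, analogously, key and value) embeddings and used for patch-to-patch self-attention, following the standard transformer recipe; the embedding dimension d and the three projection layers are assumptions.

```python
# Hedged sketch of Step 3: standard scaled dot-product self-attention
# over the flattened patches of one head; d and the projections are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 256                                 # illustrative embedding dimension
patch_dim = 32 * 10 * 10                # C * r_h * r_h for one head (r_h = 10 here)

to_q = nn.Linear(patch_dim, d)          # fully connected layer producing query embeddings
to_k = nn.Linear(patch_dim, d)          # analogous projections for keys and values
to_v = nn.Linear(patch_dim, d)

tokens = torch.randn(1, 64, patch_dim)  # N = 64 flattened patches from Step 2
q, k, v = to_q(tokens), to_k(tokens), to_v(tokens)

attn = F.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)  # patch-to-patch attention
out = attn @ v                          # attended patch features, shape (1, 64, d)
```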



Abstract

The invention belongs to the technical field of neural network security, and particularly relates to a deep fake face data identification method. The method captures subtle forgery features at different scales and uses them for deepfake detection. Specifically, a multi-modal multi-scale transformer (M2TR) is introduced, which operates on image patches of different sizes to detect local inconsistencies at different scales. To improve the detection results and the robustness to image compression, frequency information is introduced into the M2TR and further combined with the RGB features through a cross-modal fusion module. The effectiveness of the method is verified through extensive experiments, and its performance exceeds that of current state-of-the-art deepfake detection methods.
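For illustration only, the sketch below shows one way frequency information could be combined with RGB features in the spirit of the cross-modal fusion described above. The FFT log-amplitude spectrum and the 1×1-convolution fusion are stand-ins chosen for this sketch; the actual frequency transform and cross-modal fusion module of the invention are not reproduced from the patent text.

```python
# Illustrative stand-in only: FFT log-amplitude as the frequency view and a
# 1x1 convolution as the fusion; not the patent's cross-modal fusion module.
import torch
import torch.nn as nn

class SimpleCrossModalFusion(nn.Module):
    def __init__(self, rgb_ch: int, freq_ch: int):
        super().__init__()
        # fuse concatenated RGB and frequency features back to rgb_ch channels
        self.fuse = nn.Conv2d(rgb_ch + freq_ch, rgb_ch, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        # frequency-domain view of the input image (log-amplitude spectrum)
        freq = torch.log(torch.fft.fft2(image, norm="ortho").abs() + 1e-6)
        # resize the spectrum to the spatial size of the RGB feature map
        freq = nn.functional.interpolate(freq, size=rgb_feat.shape[-2:],
                                         mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([rgb_feat, freq], dim=1))

fusion = SimpleCrossModalFusion(rgb_ch=32, freq_ch=3)
image = torch.randn(1, 3, 320, 320)       # RGB input image
rgb_feat = torch.randn(1, 32, 80, 80)     # RGB feature map from the backbone
print(fusion(rgb_feat, image).shape)      # torch.Size([1, 32, 80, 80])
```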

Description

Technical field
[0001] The invention belongs to the technical field of neural network security, and in particular relates to a method for identifying deep fake human face data.
Background technique
[0002] Deep forgery, i.e. DeepFake, is a term derived from the words "deep learning" and "fake". It refers to the use of computer graphics and convolutional neural network (CNN) methods to replace the face in a video with another person's face, or to generate a face image of another person with a specific expression, in order to produce a fake video.
[0003] In recent years, with the development of deep learning, methods that use generative networks to replace or manipulate human faces in images / videos have emerged one after another, which has driven great progress in DeepFake technology. Some synthetic fake images / videos cannot even be identified by the naked eye. At the same time, the development of DeepFake ...

Claims


Application Information

Patent Timeline: no application
Patent Type & Authority: Applications (China)
IPC (8): G06K 9/00, G06K 9/34, G06K 9/46, G06K 9/62, G06N 3/04, G06N 3/08, G06T 3/40, G06T 5/10
CPC: G06N 3/04, G06N 3/08, G06T 5/10, G06T 3/4038, G06T 2200/32, G06T 2207/20052, G06T 2207/20081, G06T 2207/20084, G06T 2207/30201, G06F 18/253, G06F 18/24
Inventor: 姜育刚 (Jiang Yugang), 王君可 (Wang Junke), 陈静静 (Chen Jingjing), 吴祖煊 (Wu Zuxuan)
Owner: FUDAN UNIV