
Multi-model cooperative defense method for deep learning adversarial attacks

A technology combining deep learning and collaborative defense, applied to neural learning methods, biological neural network models, character and pattern recognition, etc. It addresses problems such as low security and the inability to defend against adversarial attacks.

Inactive Publication Date: 2018-08-24
ZHEJIANG UNIV OF TECH
Cited by: 81

AI Technical Summary

Problems solved by technology

[0006] To overcome the deficiencies of existing defense methods, which offer low security and cannot defend against adversarial attacks on deep learning models, the present invention provides a multi-model collaborative defense method with high security that effectively defends deep learning models against adversarial attacks.




Detailed Description of the Embodiments

[0048] The present invention will be further described below in conjunction with the accompanying drawings.

[0049] Referring to Figure 1, a multi-model collaborative defense method for deep learning adversarial attacks includes the following steps:

[0050] 1) A ρ-loss model is proposed for the unified modeling of gradient-based attacks, and the principle of gradient-based adversarial attacks is further analyzed. The process is as follows:

[0051] 1.1) Unify all gradient-based adversarial example generation methods into one optimization-based ρ-loss model, defined as follows:

[0052] arg min λ₁‖ρ‖_p + λ₂·Loss(x_adv, f_pre(x_adv))   s.t.  ρ = x_nor − x_adv   (1)

[0053] In formula (1), ρ denotes the perturbation between the adversarial sample x_adv and the normal sample x_nor; f_pre(·) denotes the predicted output of the deep learning model; ‖·‖_p denotes the p-norm of the perturbation; Loss(·,·) denotes the loss function; λ₁ and λ₂ are th...
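To make the ρ-loss model concrete, the sketch below shows one well-known gradient-based attack (FGSM) as an instance of formula (1) with p = ∞: the perturbation ρ = x_nor − x_adv is bounded in the infinity-norm while the loss is pushed upward along the gradient sign. The toy logistic model, its weights, and the epsilon value are illustrative assumptions, not part of the patent.

```python
import numpy as np

def fgsm_perturb(x_nor, grad_loss, eps=0.1):
    """FGSM viewed through the rho-loss model: the perturbation
    rho = x_nor - x_adv satisfies ||rho||_inf = eps, and x_adv moves
    along the sign of the loss gradient to increase the loss."""
    rho = -eps * np.sign(grad_loss)   # rho = x_nor - x_adv
    x_adv = x_nor - rho               # i.e. x_adv = x_nor + eps*sign(grad)
    return x_adv, rho

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy model f_pre(x) = sigmoid(w.x), true label y = 1 (illustrative only).
w = np.array([0.5, -0.3, 0.8])
x_nor = np.array([1.0, 2.0, -1.0])

pred = sigmoid(w @ x_nor)
# Gradient of the cross-entropy loss w.r.t. x for label y = 1: (pred - 1) * w
grad = (pred - 1.0) * w

x_adv, rho = fgsm_perturb(x_nor, grad, eps=0.1)
print(np.max(np.abs(rho)))        # infinity-norm of rho equals eps
print(sigmoid(w @ x_adv) < pred)  # confidence in the true label dropped
```

Other gradient-based attacks fit the same template by changing the norm (p = 2 for iterative least-perturbation methods) or by iterating the update with a smaller step.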



Abstract

A multi-model cooperative defense method for deep learning adversarial attacks comprises the following steps: 1) unify the modeling of gradient-based attacks and propose a ρ-loss model; 2) based on the unified model, classify the basic expression forms of attacks against a target model f_pre(x) into two classes according to the generation result of the adversarial example; 3) analyze the model parameters, optimizing the parameters of the ρ-loss model and the search step length of the perturbation-solving model for the adversarial example; 4) to address the opacity of black-box attacks, design an experiment based on the AdaBoost concept: generate several substitute models of different types that accomplish the same task, integrate them, train an attack generator against the integrated model to obtain high defense capability, and perform multi-model cooperative attack detection with optimally distributed weights. The method offers high security and can effectively defend deep learning models against adversarial attacks.
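Step 4 of the abstract combines several substitute models into a weighted detector. A minimal sketch of that idea, assuming AdaBoost-style weights derived from each substitute model's validation error rate (the error rates, threshold, and vote encoding below are illustrative assumptions, not the patent's actual optimization):

```python
import numpy as np

def adaboost_weights(errors):
    """AdaBoost-style vote weights: alpha_k = 0.5 * ln((1 - e_k) / e_k),
    normalized to sum to 1. More accurate substitute models vote louder."""
    errors = np.asarray(errors, dtype=float)
    alphas = 0.5 * np.log((1.0 - errors) / errors)
    return alphas / alphas.sum()

def cooperative_detect(model_flags, weights, threshold=0.5):
    """Each substitute model emits 1 if it flags the input as adversarial,
    0 otherwise; the weighted vote decides the final detection."""
    score = float(np.dot(weights, model_flags))
    return score >= threshold

# Three hypothetical substitute models with validation error rates 10/20/30%.
w = adaboost_weights([0.10, 0.20, 0.30])
print(cooperative_detect([1, 1, 0], w))  # the two stronger models flag it
print(cooperative_detect([0, 0, 1], w))  # only the weakest model flags it
```

The design intuition is that a black-box attacker who fools one substitute model is unlikely to fool all of them at once, so the weighted ensemble raises the cost of a successful attack.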

Description

technical field

[0001] The invention belongs to the field of security for machine learning methods within artificial intelligence. Aiming at the threat that adversarial example attacks pose to deep learning methods in current machine learning, a multi-model collaborative defense method is proposed to effectively improve their security.

Background technique

[0002] Owing to its strong learning performance, the deep neural network model has been widely applied in the real world, including computer vision, natural language processing, and bioinformatics analysis. In the field of computer vision in particular, deep learning models now automatically recognize faces and automatically understand road signs and other images. The popularization and application of deep learning is therefore one of the key technologies behind the successful adoption of face recognition and autonomous driving.

[0003] With the continuo...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06N3/08; G06K9/62; G06K9/00
CPC: G06N3/08; G06V10/96; G06F18/24; G06F18/214
Inventors: 陈晋音, 郑海斌, 熊晖, 苏蒙蒙, 林翔, 俞山青, 宣琦
Owner: ZHEJIANG UNIV OF TECH