
An Attack Judgment Method for Fooling Interpretable Algorithms of Deep Neural Networks

A deep neural network and interpretability-algorithm technology, applied in the field of attack judgment for fooling interpretable algorithms of deep neural networks, which achieves good robustness.

Active Publication Date: 2022-06-28
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

[0004] The technical problems to be solved by the present invention include: providing an effective attack algorithm; constraining the size of the noise disturbance that is added; fooling the classification interpretation of both single-target-object and multi-target-object images; and, for multi-target-object images, suitably distributing the fooled interpretation across the targets.



Examples


Embodiment Construction

[0059] The present invention will be further described below with reference to the accompanying drawings and embodiments.

[0060] A complete example of the method, implemented according to the content of the present invention, is as follows:

[0061] The present invention is implemented on a VGG19 deep neural network model trained on the ImageNet dataset. Taking Grad-CAM as the example interpretability algorithm, the detailed description is as follows:
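Before the numbered steps, a minimal setup sketch under stated assumptions: a pretrained VGG19 from torchvision and a differentiable Grad-CAM that keeps the heatmap in the autograd graph (via create_graph=True) so a loss on the heatmap can later be optimized through it. The choice of the final convolutional feature map, the helper name grad_cam_diff, and all other details are illustrative assumptions, not taken from the patent text.

import torch
import torch.nn.functional as F
from torchvision.models import vgg19

model = vgg19(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)          # only the attack noise will be optimized

def grad_cam_diff(x, target_class):
    """Differentiable Grad-CAM heatmap for one preprocessed image x of
    shape (1, 3, 224, 224); returns (heatmap in [0, 1], logits)."""
    feats = model.features(x)                     # (1, 512, 7, 7) conv maps
    logits = model.classifier(torch.flatten(model.avgpool(feats), 1))
    # create_graph=True keeps these gradients differentiable, so a loss on
    # the heatmap can itself be backpropagated to the input noise
    dA = torch.autograd.grad(logits[0, target_class], feats,
                             create_graph=True)[0]
    weights = dA.mean(dim=(2, 3), keepdim=True)   # channel importance weights
    cam = F.relu((weights * feats).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    return cam[0, 0] / (cam.max() + 1e-8), logits

# Standalone usage (the weights are frozen, so x itself must carry the graph):
# cam, logits = grad_cam_diff(x.requires_grad_(True), target_class=243)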

[0062] 1) Generate random initialization noise and generate a binary mask. For an image with a single target object, as shown in the first column of Figure 2, set the value of the corresponding square area to 0 and all other areas to 1; for an image with multiple target objects, as shown in the first column of Figure 5, set the values at the positions of the two square areas to 0 simultaneously and all other areas to 1.
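A minimal sketch of this step. The input resolution, the square positions, and the 60-pixel square size are illustrative assumptions; the patent specifies the regions only via the first columns of Figure 2 and Figure 5.

H = W = 224                                         # assumed input resolution

noise = torch.randn(1, 3, H, W) * 0.01              # random initialization
noise.requires_grad_(True)

def square_mask(corners, size=60):
    """corners: list of (top, left) positions; returns a (1, 1, H, W) mask
    that is 0 inside each square region and 1 everywhere else."""
    mask = torch.ones(1, 1, H, W)
    for top, left in corners:
        mask[:, :, top:top + size, left:left + size] = 0.0
    return mask

mask_single = square_mask([(82, 82)])               # one target object
mask_multi  = square_mask([(30, 30), (134, 134)])   # two target objects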

[0063] 2) Multiply the noise by the binary mask...
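The patent text is truncated here; the following is a hedged sketch of how step 2) and the subsequent optimization plausibly fit together, based on the abstract: the masked noise is added to the image, and Adam minimizes a loss that (a) keeps the original classification, (b) pushes the Grad-CAM highlight out of the masked squares and into the perturbed region, and (c) gradually limits the noise toward imperceptibility. The loss weights, iteration count, and learning rate are assumptions.

# x: preprocessed input image, shape (1, 3, 224, 224)
orig_class = int(model(x).argmax(dim=1))          # prediction to preserve

optimizer = torch.optim.Adam([noise], lr=0.01)
lambda_int, lambda_norm = 1.0, 1e-3               # assumed loss weights

for step in range(500):
    x_adv = x + noise * mask_single               # noise only where mask == 1
    cam, logits = grad_cam_diff(x_adv, orig_class)

    # (a) keep the classification result identical to the original image
    loss_cls = F.cross_entropy(logits, torch.tensor([orig_class]))
    # (b) penalize heatmap mass inside the squares (mask == 0), driving the
    #     interpretation's highlight into the perturbed area
    loss_int = (cam * (1 - mask_single[0, 0])).sum() / (cam.sum() + 1e-8)
    # (c) gradually limit the noise so the disturbance stays imperceptible
    loss_norm = noise.norm(p=2)

    loss = loss_cls + lambda_int * loss_int + lambda_norm * loss_norm
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()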



Abstract

The invention discloses an attack method for fooling the interpretability of a deep neural network. Each input image generates a perturbed image after noise is added in a certain area. The perturbed image is used to construct a loss function term: on the premise that the classification result of the perturbed image remains the same as that of the original image, the interpretability algorithm is used to explain the classification result, and the highlighted area of the interpretation is made to lie in the perturbed area. The noise is gradually limited using the Adam optimization algorithm until the disturbance is visually imperceptible, finally generating a perturbed image that makes the interpretation wrong. The invention combines attack and interpretability tasks, can effectively attack five interpretable methods for deep neural networks, allows the size and position of the attack area to be changed arbitrarily, and can measure the robustness of an explanation under attack.
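As a hedged illustration of the abstract's two success criteria (unchanged classification, visually imperceptible disturbance), a small check along these lines could follow the optimization; the norm thresholds one would compare against are not specified in the patent.

with torch.no_grad():
    x_adv = x + noise * mask_single
    pred_adv = int(model(x_adv).argmax(dim=1))    # re-check the prediction
    linf = (noise * mask_single).abs().max().item()
    l2 = (noise * mask_single).norm().item()

print(f"classification preserved: {pred_adv == orig_class}")
print(f"disturbance L-inf: {linf:.4f}, L2: {l2:.2f}")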

Description

Technical field

[0001] The invention relates to an interpretable attack judgment method for image processing, in particular to an attack judgment method for fooling an interpretable algorithm of a deep neural network.

Background technique

[0002] For artificial intelligence systems, the real environment is complex and changeable, and mistakes in a system's decision-making can lead to heavy losses. The interpretability of an artificial intelligence model is therefore very important: it allows people to understand how the system works and how its decisions are formed, and to find the cause of an error and improve on it. It is therefore necessary to study the interpretability of deep learning. However, as artificial intelligence interpretability algorithms continue to be proposed, attacks on those algorithms have also emerged. For example, adding a disturbance to an input image can change the interpretation to a great extent while keeping the prediction result...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/55; G06N3/04; G06N3/08
CPC: G06T7/55; G06N3/084; G06T2207/10004; G06T2207/20081; G06T2207/20084; G06N3/045
Inventor: 孔祥维 (Kong Xiangwei), 宋倩倩 (Song Qianqian)
Owner: ZHEJIANG UNIV