
Attack judgment method for fooling explainable algorithm of deep neural network

A deep neural network interpretability technology, applied in the field of attack judgment for fooling explainable algorithms of deep neural networks

Active Publication Date: 2021-01-26
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

[0004] The technical problems to be solved by the present invention include: designing an effective attack algorithm; constraining the size of the perturbation added as noise; fooling the interpretation of image classification for both single-target and multi-target object images; and, for multi-target images, ensuring the fooled interpretation is suitably distributed.



Examples


Embodiment Construction

[0059] The present invention will be further described below in conjunction with the drawings and embodiments.

[0060] An example implementing the complete method of the invention, and its implementation details, are as follows:

[0061] The present invention is implemented on a VGG19 deep neural network model trained on the ImageNet dataset. Taking Grad-CAM as the example explainable algorithm, the details are as follows:
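Grad-CAM, the interpretation method used as the running example here, computes a saliency map by weighting the last convolutional feature maps with the spatially averaged gradients of the class score, then applying a ReLU. A minimal numpy sketch of that computation (the feature maps and gradients below are random stand-ins, not the output of an actual VGG19 forward/backward pass):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM saliency map.

    feature_maps, gradients: arrays of shape (K, H, W), where gradients
    holds d(class score)/d(feature_maps). Each channel weight is the
    global average of its gradient channel; the map is the ReLU of the
    weighted channel sum.
    """
    weights = gradients.mean(axis=(1, 2))                    # (K,)
    cam = np.tensordot(weights, feature_maps, axes=(0, 0))   # (H, W)
    return np.maximum(cam, 0.0)                              # ReLU

rng = np.random.default_rng(0)
A = rng.standard_normal((512, 14, 14))    # VGG19 conv5-style feature maps
dA = rng.standard_normal((512, 14, 14))   # stand-in gradients
cam = grad_cam(A, dA)
```

In practice the map would be upsampled to the input resolution and overlaid on the image; that step is omitted here.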

[0062] 1) Generate random initialization noise and generate a binary mask. For a single image with a single target object (as shown in the first column of Figure 2), set the values at the position of the corresponding square region to 0 and all other regions to 1; for a single image with multiple targets (as shown in the first column of Figure 5), set the values at the positions of the two corresponding square regions to 0 and all other regions to 1.
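A minimal sketch of step 1, assuming numpy, with each square region given by its top-left corner and side length (the specific coordinates, image size, and noise scale below are illustrative assumptions, not values from the patent):

```python
import numpy as np

def make_mask(h, w, squares):
    """Binary mask: 0 inside each square region, 1 everywhere else."""
    mask = np.ones((h, w), dtype=np.float32)
    for top, left, size in squares:
        mask[top:top + size, left:left + size] = 0.0
    return mask

# Single-target image: one square region (cf. Figure 2, first column).
mask1 = make_mask(224, 224, [(80, 80, 64)])

# Multi-target image: two square regions (cf. Figure 5, first column).
mask2 = make_mask(224, 224, [(30, 30, 50), (140, 140, 50)])

# Random initialization noise, same spatial shape as the mask.
noise = np.random.randn(224, 224).astype(np.float32) * 0.1
```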

[0063] 2) Multiply the noise by the binary mask, and...
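The truncated step 2 can be sketched as follows, reusing the mask and noise of step 1: the elementwise product zeroes the noise inside the square region, and the result is added to the input image to form the disturbance image. The clipping to [0, 1] is an assumption for a normalized image, not stated in this excerpt:

```python
import numpy as np

h = w = 224
image = np.random.rand(h, w).astype(np.float32)          # stand-in input image in [0, 1]
noise = np.random.randn(h, w).astype(np.float32) * 0.1   # random init noise
mask = np.ones((h, w), dtype=np.float32)
mask[80:144, 80:144] = 0.0                               # square region set to 0

masked_noise = noise * mask                              # noise zeroed inside the square
perturbed = np.clip(image + masked_noise, 0.0, 1.0)      # disturbance image
```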



Abstract

The invention discloses an attack judgment method for fooling an explainable algorithm of a deep neural network. The method comprises the steps of: adding noise to an input image within a certain region to generate a disturbance image; constructing a loss function term using the disturbance image; on the premise that the classification result of the disturbance image remains the same as that of the original image, explaining the classification result with the explainable algorithm and locating the explained salient region within the disturbance region; and gradually limiting the noise with the Adam optimization algorithm until the disturbance cannot be visually perceived, so that the disturbance is inconspicuous and a disturbance image with a wrong interpretation is finally generated. The invention combines attack and explainability tasks, can effectively attack five deep neural network explainability methods, allows the size and position of the attack region to be changed at will, and can measure the robustness of interpretation under attack.
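The abstract's final optimization step, gradually limiting the noise with Adam, can be illustrated with a hand-rolled Adam update minimizing a toy loss containing only an L2 penalty on the noise. This is a placeholder for the patent's actual attack loss (which additionally enforces the unchanged classification and the relocated salient region), and the learning rate and penalty weight are assumptions:

```python
import numpy as np

def adam_step(x, g, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update on x given gradient g; returns (x, m, v)."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)            # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)            # bias-corrected second moment
    return x - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

rng = np.random.default_rng(0)
noise = rng.standard_normal((8, 8)) * 0.5
init_max = float(np.abs(noise).max())
m = np.zeros_like(noise)
v = np.zeros_like(noise)
lam = 1.0                                # weight of the L2 noise penalty
for t in range(1, 301):
    grad = 2.0 * lam * noise             # gradient of lam * ||noise||^2
    noise, m, v = adam_step(noise, grad, m, v, t)
# The noise magnitude shrinks toward imperceptibility over the iterations.
```

In the actual method the gradient would come from backpropagating the full attack loss through the network and the explainable algorithm, not from the penalty term alone.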

Description

technical field [0001] The invention relates to an interpretable attack judgment method for image processing, in particular to an attack judgment method for fooling an explainable algorithm of a deep neural network. Background technique [0002] For artificial intelligence systems, the real environment is complex and changeable, and decision-making mistakes by the system can lead to heavy losses. The interpretability of artificial intelligence models therefore becomes very important, so that people can understand how the system works and how its decisions are formed, find the causes of its errors, and improve it. It is thus necessary to study the interpretability of deep learning. However, as explainability algorithms for artificial intelligence are continuously proposed, attacks on explainable algorithms have also emerged: for example, adding a perturbation to the input image can greatly change the interpretation result while keeping the prediction unchanged. Therefore, it is d...


Application Information

IPC (8): G06T7/55, G06N3/04, G06N3/08
CPC: G06T7/55, G06N3/084, G06T2207/10004, G06T2207/20081, G06T2207/20084, G06N3/045
Inventor: 孔祥维, 宋倩倩
Owner ZHEJIANG UNIV