Method and device for protecting neural network model security

A neural network model security technology, applied to biological neural network models, neural learning methods, neural architectures, etc. It addresses problems such as attacks by attackers or gray-market actors and the theft of sensitive model information, achieving the effects of privacy protection, guaranteed prediction performance, and reduced resource consumption.

Active Publication Date: 2021-02-05
ALIPAY (HANGZHOU) INFORMATION TECH CO LTD

AI Technical Summary

Problems solved by technology

However, when the training data is sensitive or private data, such as users' personal information, the trained neural network carries a large amount of sensitive and private information. If the model is exposed directly, attackers or gray-market actors can easily attack through the model and steal the sensitive information it carries.




Embodiment Construction

[0020] Multiple embodiments disclosed in this specification will be described below in conjunction with the accompanying drawings.

[0021] As mentioned earlier, if a trained neural network model is fully exposed, attackers or gray-market actors can easily attack through the model and steal the sensitive information it carries. For example, after obtaining the neural network model, an attacker can infer the statistical characteristics memorized in a network layer by visualizing it. Suppose the neural network model is used to decide whether to provide a certain service to a user, and the feature memorized by a certain network layer is: if the user's age is greater than 52, no loan service is provided. The attacker can then modify a user's age (for example, changing 54 to 48) so that an ineligible user can obtain the loan service. As another example, after obtaining the neural network model, the attacker can observe the data distribution of the output...



Abstract

The embodiments of this specification provide a method for protecting the security of a neural network model, including: obtaining a neural network model comprising multiple network layers trained with training data; for any first network layer among them, with the parameters of the other network layers fixed, performing a first parameter adjustment on the first network layer using the training data to obtain a first fine-tuned model; determining a first index value of a preset performance index for the first fine-tuned model, where the index value of the preset performance index depends on the relative size between the corresponding model's test loss on test data and its training loss on the training data; similarly, performing a second parameter adjustment on the first network layer using both the training data and the test data to obtain a second fine-tuned model, and determining a second index value; and determining the information sensitivity of the first network layer based on the relative size of the first index value and the second index value. If the sensitivity is greater than a predetermined threshold, security processing is applied to the first network layer.
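The abstract's per-layer procedure can be sketched in plain Python. This is a minimal illustration under assumptions the patent does not specify: the "layer" is a 1-D linear map y ≈ w·x, "fine-tuning" is closed-form least squares, the preset performance index is taken to be the gap between test loss and training loss, and the threshold value is arbitrary. None of these choices come from the patent itself.

```python
# Sketch of the per-layer information-sensitivity check from the abstract.
# Assumptions (not from the patent): a 1-D linear "layer", least-squares
# "fine-tuning", and index = test loss minus training loss.

def fit(xs, ys):
    """Least-squares slope for y = w * x (stand-in for fine-tuning one layer)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def mse(w, xs, ys):
    """Mean squared error of the fitted layer on a data split."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Toy train/test splits; the test split follows a slightly different trend,
# mimicking a layer that memorized training-specific statistics.
x_train, y_train = [1, 2, 3, 4], [2.0, 4.1, 5.9, 8.2]
x_test,  y_test  = [5, 6],       [11.5, 13.8]

def gap_index(w):
    # Hypothetical "preset performance index": test loss minus training loss.
    return mse(w, x_test, y_test) - mse(w, x_train, y_train)

# First parameter adjustment: tune the layer on training data only.
idx1 = gap_index(fit(x_train, y_train))

# Second parameter adjustment: tune the layer on training + test data.
idx2 = gap_index(fit(x_train + x_test, y_train + y_test))

# Information sensitivity from the relative size of the two index values:
# a layer whose generalization gap shrinks sharply once it also sees the
# test data was leaning on training-set particulars.
sensitivity = idx1 - idx2
THRESHOLD = 0.05  # illustrative value, not from the patent
print("apply security processing" if sensitivity > THRESHOLD else "layer looks safe")
```

In this toy run the training-only fit generalizes poorly to the test split, so `idx1` is large, `idx2` is small, and the layer is flagged for security processing. In the patent's setting the same comparison would be made per network layer, with the other layers' parameters frozen during each adjustment.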

Description

Technical Field

[0001] The embodiments of this specification relate to the technical field of data security, and in particular to a method and device for protecting the security of a neural network model.

Background

[0002] At present, it is common industry practice to train a neural network with a large amount of data so that it achieves a good prediction effect. The neural network memorizes the characteristics of the data in order to give accurate predictions. However, when the training data is sensitive or private data, such as users' personal information, the trained neural network carries a large amount of sensitive and private information. If the model is exposed directly, attackers or gray-market actors can easily attack through the model and steal the sensitive information it carries.

[0003] Therefore, there is a need for a solution that can protect the security of the neural network mod...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F21/57; G06Q10/06; G06N3/04; G06N3/08
CPC: G06F21/57; G06Q10/06393; G06N3/08; G06N3/045
Inventors: Wang Li (王力), Zhou Jun (周俊)
Owner ALIPAY (HANGZHOU) INFORMATION TECH CO LTD