
Deep network model compression training method based on generative adversarial neural network

A deep network training method applied in the field of deep network model compression. It addresses the decline in accuracy and robustness that deep network models suffer under compression, and achieves the effects of a stable training process, faster model convergence, and improved robustness.

Inactive Publication Date: 2020-09-04
BEIHANG UNIV

AI Technical Summary

Problems solved by technology

[0005] Whether the network is pruned or quantized, the accuracy and robustness of the deep network model are reduced to some degree, and as the degree of compression increases, accuracy and robustness drop sharply.




Embodiment Construction

[0028] The technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art, based on the embodiments of the present invention and without creative work, fall within the protection scope of the present invention.

[0029] The deep network model compression training method based on a generative adversarial neural network provided by the present invention offers possible technical support for porting large-scale deep network models to an FPGA platform. It mainly comprises knowledge distillation of the network's structural information and a training strategy for the network model. The structural information of the network model is distilled by a generative adversaria...
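To illustrate this training strategy, the following is a minimal PyTorch-style sketch of one adversarial distillation step, not the patent's exact procedure. The teacher (original network), the compressed student (whose feature extractor plays the role of the G network), and the discriminator (D network) are assumed to expose `features`/`classifier` modules with a sigmoid output for D; all names are hypothetical, and the structural JS loss is sketched separately after the Abstract below.

```python
# Hedged sketch of one adversarial distillation step: D learns to distinguish
# teacher feature maps from student feature maps, while the compressed student
# (G) learns to fool D and to minimize its own task ("self") loss.
import torch
import torch.nn.functional as F

def adversarial_distill_step(teacher, student, disc, images, labels,
                             opt_student, opt_disc, struct_loss_fn=None):
    with torch.no_grad():
        feat_t = teacher.features(images)      # original-network feature maps (fixed)
    feat_s = student.features(images)          # compressed-network (G) feature maps

    # ---- update the D network: real = teacher features, fake = student features ----
    d_real = disc(feat_t)
    d_fake = disc(feat_s.detach())
    loss_d = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_disc.zero_grad()
    loss_d.backward()
    opt_disc.step()

    # ---- update the compressed network with the comprehensive loss ----
    logits = student.classifier(feat_s)
    loss = F.cross_entropy(logits, labels)        # self-loss of the compressed network
    if struct_loss_fn is not None:                # structural (JS-divergence) term
        loss = loss + struct_loss_fn(feat_t, feat_s)
    d_fake = disc(feat_s)
    loss = loss + F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))  # fool D
    opt_student.zero_grad()
    loss.backward()
    opt_student.step()
    return loss_d.item(), loss.item()
```

As the feature maps produced by G become indistinguishable to D from the original network's feature maps, the compressed network inherits the teacher's structural information while keeping its own task loss low.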



Abstract

The invention discloses a deep network model compression training method based on a generative adversarial neural network. The method comprises the steps of: randomly selecting an image from the training data set to serve as the input of both the original network and the initial compressed network; taking the feature extractor of the initial compressed network as the G network; calculating the JS divergence between the feature maps extracted by the feature extractors of the original network and of the initial compressed network as a structural loss function; constructing a comprehensive loss function from the structural loss function and the self-loss function of the initial compressed network; and respectively inputting the feature maps extracted by the original network's feature extractor and the initial compressed network's feature extractor into a D network, updating the D network or the compressed network according to the judgment result of the D network, so that the feature maps output by the G network finally become arbitrarily close to those of the original network. The compressed network is updated based on the comprehensive loss function, so that the prediction precision and robustness of the model are improved.
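The structural loss and the comprehensive loss described in the abstract can be sketched as follows. This is a hedged illustration rather than the patent's exact formulation: the abstract does not specify how the feature maps are turned into distributions, so the per-channel softmax normalization, the assumption that teacher and student feature maps share the same shape, and the weighting factor `lambda_struct` are all assumptions.

```python
# Hedged sketch of the structural (JS-divergence) loss and the comprehensive loss.
# Assumptions: teacher and student feature maps share a (B, C, H, W) shape, and
# each channel's spatial map is normalized into a distribution with a softmax.
import torch
import torch.nn.functional as F

def structural_js_loss(feat_teacher, feat_student):
    """Jensen-Shannon divergence between teacher and student feature-map distributions."""
    p = F.softmax(feat_teacher.flatten(2), dim=-1)   # (B, C, H*W) distributions
    q = F.softmax(feat_student.flatten(2), dim=-1)
    m = 0.5 * (p + q)
    # JS(P, Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M); F.kl_div takes log-probs first.
    return 0.5 * F.kl_div(m.log(), p, reduction='batchmean') \
         + 0.5 * F.kl_div(m.log(), q, reduction='batchmean')

def comprehensive_loss(feat_teacher, feat_student, logits, labels, lambda_struct=1.0):
    """Self-loss of the compressed network plus the weighted structural loss."""
    self_loss = F.cross_entropy(logits, labels)
    return self_loss + lambda_struct * structural_js_loss(feat_teacher, feat_student)
```

The `structural_js_loss` function here is the one referenced as `struct_loss_fn` in the training-step sketch given under the embodiment description above.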

Description

Technical field
[0001] The present invention relates to the technical field of deep network model compression, and more specifically to a deep network model compression training method based on a generative adversarial neural network.
Background technique
[0002] With the rapid development of deep learning technology, deep neural networks have achieved major breakthroughs in computer vision, speech recognition, and natural language processing. However, deep learning algorithms have not been widely deployed in industry, manufacturing, aerospace, and navigation. One of the reasons is that deep learning network models are huge and computationally expensive. The weight file of a CNN can easily reach hundreds of megabytes: AlexNet has 61M parameters and occupies 249 MB of memory, and the more complex VGG16 and VGG19 exceed 500 MB, which demands larger storage capacity and more floating-point operations. Due t...
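As a rough sanity check on the sizes quoted above, the weight-file size can be estimated from the parameter count. The sketch below assumes 32-bit (4-byte) floating-point weights; VGG16's roughly 138M-parameter count is a commonly cited figure rather than something stated in this text.

```python
# Back-of-the-envelope estimate of weight-file size from parameter count,
# assuming 4-byte (float32) weights and ignoring file-format overhead.
def weight_file_mb(num_params, bytes_per_param=4):
    return num_params * bytes_per_param / 1e6

print(weight_file_mb(61e6))    # AlexNet: ~244 MB, in line with the 249 MB quoted
print(weight_file_mb(138e6))   # VGG16 (~138M params, commonly cited): ~552 MB, i.e. over 500 MB
```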

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N 3/04; G06N 3/063; G06N 3/08
CPC: G06N 3/063; G06N 3/08; G06N 3/045
Inventors: 姜宏旭, 黄双喜, 李波, 李晓宾, 田方正
Owner: BEIHANG UNIV