
An Adaptive Asymmetric Quantized Compression Method for Deep Neural Network Models

A deep neural network compression technology based on asymmetric quantization, applied in the field of deep neural network model compression. It addresses the problems of insufficient representation ability, low parameter-space utilization, and instability in prior methods, with the effects of reducing parameter redundancy and improving recognition accuracy.

Active Publication Date: 2020-11-24
BEIJING UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0008] Aiming at the deficiencies of the above-mentioned prior art, the present invention provides an adaptive asymmetric quantized deep neural network model compression method. It solves three problems: the instability caused by deriving quantization thresholds from approximate calculations under assumed weight distributions; the insufficient representation ability caused by symmetric ternary quantization; and the low parameter-space utilization caused by storing only three values in a 2-bit code.


Examples


Embodiment Construction

[0041] To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.

[0042] Referring to Figure 1, the present embodiment provides an adaptive asymmetric quantized deep neural network model compression method, comprising the following steps:

[0043] S101: during deep neural network training, for each batch, before forward propagation begins, adaptively quantize the floating-point weights of each layer of the network into asymmetric ternary or quaternary values.
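Step S101 can be illustrated with a minimal sketch of asymmetric ternary quantization. The threshold rule and the `ratio` parameter below are illustrative assumptions, since this excerpt does not show the patent's exact formulas; the properties the sketch demonstrates are that thresholds are derived adaptively from the weights themselves rather than from an assumed distribution, and that the positive and negative quantization levels differ (the asymmetry).

```python
import numpy as np

def asymmetric_ternary_quantize(w, ratio=0.7):
    """Quantize a float weight tensor to {neg_level, 0, pos_level}.

    Thresholds are a fraction of the mean positive / mean negative
    weight, so they adapt to each layer's actual weight statistics.
    The positive and negative levels are computed independently,
    making the quantization asymmetric.
    NOTE: the threshold rule and `ratio` are illustrative assumptions,
    not the patent's exact formulation.
    """
    pos, neg = w[w > 0], w[w < 0]
    t_pos = ratio * pos.mean() if pos.size else 0.0
    t_neg = ratio * neg.mean() if neg.size else 0.0  # a negative number

    q = np.zeros_like(w)
    pos_mask = w > t_pos
    neg_mask = w < t_neg
    # Each side gets its own scale: the mean magnitude of the weights
    # it replaces, which minimizes L2 error for a fixed mask.
    if pos_mask.any():
        q[pos_mask] = w[pos_mask].mean()
    if neg_mask.any():
        q[neg_mask] = w[neg_mask].mean()
    return q
```

In the training scheme described later in the abstract, such a quantized tensor would be used only for the forward pass, while gradient updates are applied to the retained floating-point weights.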



Abstract

The invention discloses an adaptive asymmetric quantization compression method for deep neural network models, comprising the following steps: during deep neural network training, for each batch, before forward propagation begins, adaptively quantize the floating-point weights of each layer of the network into asymmetric ternary or quaternary values; in the back-propagation parameter-update stage, update parameters using the original floating-point network weights; and finally, compress and store the trained quantized deep neural network. The method reduces the parameter redundancy of the deep neural network, adaptively quantizes the remaining parameters, compresses the network model to the greatest extent, and improves the recognition accuracy of the quantization method on deep networks and large data sets.
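For the final compression-storage step, both ternary and quaternary values fit in 2 bits per weight, and a quaternary code uses all four 2-bit patterns, addressing the parameter-space under-utilization that the summary attributes to storing ternary values in 2 bits. A minimal packing sketch follows; the code assignment and byte layout are assumptions for illustration, not the patent's storage format.

```python
import numpy as np

def pack_2bit(codes):
    """Pack an array of 2-bit codes (values 0..3) into bytes, 4 per byte.

    With quaternary quantization all four codes carry information, so
    2-bit storage is fully utilized; a ternary scheme leaves one of the
    four patterns unused.
    """
    codes = np.asarray(codes, dtype=np.uint8)
    pad = (-len(codes)) % 4          # pad to a multiple of 4 codes
    codes = np.concatenate([codes, np.zeros(pad, dtype=np.uint8)])
    c = codes.reshape(-1, 4)
    # Little-endian within each byte: first code in the low bits.
    return (c[:, 0] | (c[:, 1] << 2) | (c[:, 2] << 4) | (c[:, 3] << 6)).astype(np.uint8)

def unpack_2bit(packed, n):
    """Recover the first n 2-bit codes from a packed byte array."""
    p = np.asarray(packed, dtype=np.uint8)
    out = np.stack([(p >> s) & 0b11 for s in (0, 2, 4, 6)], axis=1).reshape(-1)
    return out[:n]
```

Decompression then maps each 2-bit code back to its per-layer quantization level (e.g. the asymmetric negative level, zero, and one or two positive levels), so a layer's storage cost is the packed codes plus a few floating-point scale values.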

Description

technical field [0001] The invention relates to the technical field of deep neural network model compression, and in particular to an adaptive asymmetric quantized deep neural network model compression method. Background technique [0002] In recent years, deep learning has gradually replaced traditional machine learning in everyday applications. Deep neural networks have achieved strong results in machine learning tasks such as speech recognition, image classification, and machine translation. However, the deep hierarchical structure of classic deep neural network models entails millions of floating-point parameter computations, making it difficult for most networks to be deployed on mobile and embedded devices while maintaining good processing performance. How to maximally compress neural network parameters while preserving the recognition performance of the original network is gradually becoming an important research directi...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06N3/08
CPC: G06N3/08
Inventor: 张丽, 潘何益
Owner BEIJING UNIV OF TECH