
Compression method and compression device of deep neural network model, terminal and storage medium

A technology for compressing deep neural network models, applied to terminals and storage media, which addresses the problem that compressed deep neural network models suffer from low accuracy and validity.

Inactive Publication Date: 2018-11-02
SPREADTRUM COMM (SHANGHAI) CO LTD
0 Cites · 6 Cited by

AI Technical Summary

Problems solved by technology

Although existing approaches can compress the neural network model to a certain extent, the accuracy and effectiveness of the compressed deep neural network model remain low.

Method used




Detailed Description of the Embodiments

[0047] At present, methods for simplifying and compressing deep neural network models fall into two main categories: methods that change the density of the model, and methods that change the diversity of the model's parameters.

[0048] Density-changing methods achieve compression by altering the sparsity of the neural network. Some algorithms set a relatively small threshold and delete parameters in the deep neural network model whose values fall below it; this choice is highly subjective, and neural networks with different structures require extensive parameter tuning to achieve a satisfactory simplification effect. Other algorithms screen input nodes based on the contribution of each input node to the output response; such algorithms target only single-hidden-layer networks and do not sufficiently process hidden layer parameter...
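The threshold-based deletion described above can be sketched as magnitude pruning: zero out every parameter whose absolute value falls below a hand-picked cutoff. This is a minimal illustration, not the patent's method; the function name and the threshold value are assumptions, and the text itself notes that choosing the threshold is subjective and structure-dependent.

```python
import numpy as np

def prune_small_weights(weights, threshold=0.01):
    """Zero out weights whose magnitude falls below `threshold`.

    `threshold` is a hypothetical, hand-tuned hyperparameter; networks
    with different structures would need different values.
    """
    pruned = weights.copy()
    pruned[np.abs(pruned) < threshold] = 0.0
    return pruned

# Example: a small weight matrix with mixed magnitudes.
w = np.array([[0.5, 0.004], [-0.02, -0.001]])
print(prune_small_weights(w, threshold=0.01))
# → [[ 0.5   0.  ]
#    [-0.02  0.  ]]
```

The resulting matrix is sparser and can be stored in a compressed format, but, as the paragraph above points out, an ill-chosen threshold can delete parameters that matter, which is one source of the accuracy loss the invention aims to avoid.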



Abstract

The invention discloses a compression method and a compression device for a deep neural network model, a terminal and a storage medium. The method comprises the following steps: obtaining a trained deep neural network model; quantizing the parameters of each layer of the deep neural network model layer by layer, and iterating the quantization N times until the quantized deep neural network model satisfies a preset compression-volume requirement, wherein the quantization of the i-th layer's parameters in the trained deep neural network model is performed as follows: performing cluster analysis on the i-th layer's parameters, and determining the corresponding quantization range and quantization levels according to the cluster-analysis result; and quantizing the parameters within the quantization range of the i-th layer according to the determined quantization levels. With the above scheme, the precision and validity of the deep neural network model can be preserved while the model is compressed.
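The abstract's per-layer step (cluster the i-th layer's parameters, then quantize them to the levels the clusters suggest) can be sketched with one-dimensional k-means, where each cluster centroid serves as a quantization level. The patent does not specify the clustering algorithm, the number of levels, or the initialization, so all of those are assumptions in this sketch.

```python
import numpy as np

def quantize_layer(params, n_levels=4, n_iters=20):
    """Quantize one layer's parameters via 1-D k-means clustering.

    Assumptions (not specified by the source): plain k-means with
    `n_levels` centroids initialized evenly over the parameter range;
    each parameter is replaced by its nearest centroid.
    """
    flat = params.ravel()
    centroids = np.linspace(flat.min(), flat.max(), n_levels)
    for _ in range(n_iters):
        # Assign each parameter to its nearest centroid.
        labels = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # Recompute each centroid as the mean of its assigned parameters.
        for k in range(n_levels):
            if np.any(labels == k):
                centroids[k] = flat[labels == k].mean()
    return centroids[labels].reshape(params.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
q = quantize_layer(w, n_levels=4)
print(np.unique(q).size)  # at most 4 distinct values after quantization
```

Applying this layer by layer, and iterating until a target compression volume is reached, mirrors the overall loop the abstract describes: each layer ends up with only `n_levels` distinct parameter values, which can be stored as a small codebook plus per-parameter indices.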

Description

technical field

[0001] The present invention relates to the technical field of information processing, and in particular to a compression method and device, a terminal and a storage medium for a deep neural network model.

Background technique

[0002] With the rapid development of research on deep-neural-network-related technologies, a large number of such technologies have emerged in related fields, such as convolutional neural networks applied in the field of vision and recurrent neural networks applied in speech recognition or natural language processing; these neural network technologies have greatly improved processing accuracy in the corresponding fields.

[0003] Compared with shallow learning, deep neural networks have great potential for development. Through the multi-layer processing structure of the deep neural network model, the representative features of the samples can be extracted and analyzed, and the sample features can be trans...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/08, G06N3/04
CPC: G06N3/082, G06N3/04
Inventors: 林福辉, 赵晓辉
Owner: SPREADTRUM COMM (SHANGHAI) CO LTD