
Compression method and apparatus for deep neural network model, terminal and storage medium

A deep neural network compression technology, applied in the field of compression methods and apparatuses, terminals and storage media for deep neural network models, which can solve the problem that the accuracy and validity of the simplified deep neural network model are low.

Publication Date: 2018-11-02 (Status: Inactive)
SPREADTRUM COMM (SHANGHAI) CO LTD

AI Technical Summary

Problems solved by technology

Although such methods can compress the neural network model to a certain extent, the accuracy and effectiveness of the simplified deep neural network model are low.

Method used

Embodiment Construction

[0065] At present, methods for simplifying and compressing deep neural network models fall mainly into two categories: methods that change the density of the deep neural network model, and methods that change the diversity of its parameters.

[0066] Density-changing methods achieve compression by changing the sparseness of the neural network. Some algorithms set a relatively small threshold and delete the small-value parameters of the deep neural network model that fall below it; this is highly subjective, and obtaining an ideal simplification effect requires extensive parameter tuning for neural networks with different structures. Other algorithms screen input nodes based on the contribution relationship between input nodes and output responses; such algorithms only target single-hidden-layer neural networks and do not sufficiently process hidden-layer parameter...
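
For illustration, the sketch below shows one generic form of the threshold-based pruning described above: parameters whose magnitude falls under a chosen cutoff are zeroed out. The function name, the NumPy weight-matrix representation of a layer, and the example threshold are assumptions made for the sketch, not details taken from this application.

```python
# Illustrative sketch of magnitude-threshold pruning (the "density-changing"
# style of compression discussed above). The layer representation and the
# threshold value are assumptions, not details from this application.
import numpy as np

def prune_by_threshold(weights: np.ndarray, threshold: float) -> np.ndarray:
    """Zero out parameters whose absolute value is below the threshold."""
    mask = np.abs(weights) >= threshold
    return weights * mask

# Usage: prune a random weight matrix and report the resulting sparsity.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 64))
w_pruned = prune_by_threshold(w, threshold=0.05)
sparsity = 1.0 - np.count_nonzero(w_pruned) / w_pruned.size
print(f"sparsity after pruning: {sparsity:.2%}")
```

As the paragraph notes, the weakness of this style is that the threshold is chosen by hand, so different network structures need different tuning before a useful sparsity is reached without hurting accuracy.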


Abstract

The invention discloses a compression method and apparatus for a deep neural network model, a terminal and a storage medium. The method comprises the steps of: simplifying the deep neural network model based on obtained information of the contribution degrees of simplified units of all layers in the deep neural network model, until the simplified deep neural network model meets a preset precision demand; re-training the simplified deep neural network model to obtain a retrained deep neural network model; when it is determined that the retrained deep neural network model does not meet a preset compression volume demand, carrying out iterative quantization on the parameters of the layers in the retrained deep neural network model; and performing N iterations of quantization on the retrained deep neural network model, until the quantized deep neural network model meets the preset compression volume demand. According to this scheme, the precision and effectiveness of the deep neural network model can both be taken into account when the model is compressed.
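
The sketch below is a minimal reading of the workflow summarized in the abstract: prune low-contribution units for as long as the precision demand is still satisfied, retrain, then iteratively quantize until the compression volume demand is met. Every callable passed into the function (pruning, retraining, precision check, size measurement, quantization) is a hypothetical placeholder standing in for steps the application describes only at this level of detail; this is one possible interpretation, not the claimed implementation.

```python
# A minimal sketch of the compress-retrain-quantize loop summarized in the
# abstract. All callables passed in (prune_lowest_contribution, retrain,
# meets_precision, model_size, quantize_layers) are hypothetical placeholders,
# not APIs defined by this application.

def compress_model(model,
                   prune_lowest_contribution,  # removes the least-contributing unit
                   retrain,                    # fine-tunes the simplified model
                   meets_precision,            # checks the preset precision demand
                   model_size,                 # current compressed size of the model
                   quantize_layers,            # one round of layer-wise quantization
                   volume_demand,              # preset compression volume demand
                   max_quant_iters=10):        # cap on quantization rounds (N)
    # Step 1: simplify unit by unit, in order of contribution degree, for as
    # long as the simplified model still satisfies the precision demand.
    while True:
        candidate = prune_lowest_contribution(model)
        if not meets_precision(candidate):
            break
        model = candidate

    # Step 2: retrain the simplified model to recover accuracy.
    model = retrain(model)

    # Step 3: iteratively quantize the retrained model until the preset
    # compression volume demand is met (or the iteration cap is reached).
    for _ in range(max_quant_iters):
        if model_size(model) <= volume_demand:
            break
        model = quantize_layers(model)
        model = retrain(model)
    return model
```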

Description

Technical Field

[0001] The present invention relates to the technical field of information processing, and in particular to a compression method and apparatus, a terminal, and a storage medium for a deep neural network model.

Background Technique

[0002] With the rapid development of research on deep-neural-network-related technology, a large number of deep-neural-network-related technologies have emerged in related fields, such as convolutional neural networks in the field of vision and recurrent neural networks in the fields of speech recognition and natural language processing. These neural network technologies have greatly improved processing accuracy in the corresponding fields.

[0003] Compared with shallow learning, deep neural networks have huge development potential. Through the multi-layer processing structure of the deep neural network model, the characteristic features of a sample can be extracted and analyzed, and the sample features can...


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06N3/04
CPCG06N3/045
Inventor: 林福辉, 赵晓辉
Owner: SPREADTRUM COMM (SHANGHAI) CO LTD