
Network model compression method and device based on multi-granularity pruning

A network model compression technology based on multi-granularity pruning, applied in the fields of video surveillance, deep neural networks, and image processing. It addresses the problem that existing methods cannot significantly reduce storage and computing resources, achieving the effects of reducing computing consumption and compressing model size.

Inactive Publication Date: 2017-11-21
BEIJING ICETECH SCI & TECH CO LTD

AI Technical Summary

Problems solved by technology

[0004] However, existing model compression techniques usually reduce the size of the model by sparsifying the model weight values; this cannot significantly reduce the storage and computing resources required to run a deep learning network.
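The limitation described above can be illustrated with a minimal numpy sketch (not part of the patent; the array sizes are purely illustrative): zeroing 90% of a dense weight matrix leaves its storage footprint unchanged, whereas structurally removing 90% of its rows actually shrinks it.

```python
import numpy as np

# Dense layer weights: 512 outputs x 1024 inputs (illustrative sizes).
w = np.random.default_rng(1).standard_normal((512, 1024))

# Weight sparsification: zero the 90% smallest-magnitude entries.
thresh = np.quantile(np.abs(w), 0.9)
w_sparse = np.where(np.abs(w) < thresh, 0.0, w)

# Structured pruning: physically keep only the 51 rows (~10%) with the
# largest L1 norm, removing the rest from the array entirely.
keep = np.argsort(np.abs(w).sum(axis=1))[-51:]
w_pruned = w[keep]

print(w_sparse.nbytes)  # 4194304 - dense storage is unchanged by zeroing
print(w_pruned.nbytes)  # 417792  - structured pruning actually shrinks storage
```

The sparse matrix still occupies its full dense footprint (and a dense matrix multiply still touches every entry), which is why unstructured weight sparsification alone does not reduce the resources needed to run the network.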

Method used



Examples


Embodiment Construction

[0036] In order to enable the examiner to further understand the structure, features, and other purposes of the present invention, the attached preferred embodiments are described in detail as follows. The described preferred embodiments are only used to illustrate the technical solutions of the present invention, not to limit the invention.

[0037] The network model compression method based on multi-granularity pruning according to the present invention comprises one or more of the following three steps:

[0038] An input-channel-granularity pruning step, which uses an unimportant-element pruning method to prune the unimportant elements at the input-channel granularity level of the network model;

[0039] A convolution-kernel-granularity pruning step, which uses an unimportant-element pruning method to prune the unimportant elements at the convolution-kernel granularity level of the network model;

[00...
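The three pruning granularities named in the method (input channel, convolution kernel, and weight parameter) can be sketched as follows. This is a hypothetical numpy illustration, not the patent's implementation: the patent text does not specify the importance criterion, so L1-norm magnitude is assumed here as the "unimportant element" measure, applied to a convolution weight tensor of shape (out_channels, in_channels, kH, kW).

```python
import numpy as np

def prune_multi_granularity(weights, channel_keep=0.75, kernel_keep=0.75,
                            weight_keep=0.9):
    """Prune a conv weight tensor (out_ch, in_ch, kH, kW) at three
    granularity levels, keeping the largest-L1-magnitude elements.
    Keep ratios are illustrative hyperparameters, not from the patent."""
    w = weights.copy()

    # 1. Input-channel granularity: remove input channels with smallest L1 norm.
    ch_scores = np.abs(w).sum(axis=(0, 2, 3))        # one score per input channel
    n_keep = max(1, int(round(channel_keep * w.shape[1])))
    keep_ch = np.sort(np.argsort(ch_scores)[-n_keep:])
    w = w[:, keep_ch, :, :]

    # 2. Convolution-kernel granularity: remove whole filters with smallest L1 norm.
    k_scores = np.abs(w).sum(axis=(1, 2, 3))         # one score per output filter
    n_keep = max(1, int(round(kernel_keep * w.shape[0])))
    keep_k = np.sort(np.argsort(k_scores)[-n_keep:])
    w = w[keep_k, :, :, :]

    # 3. Weight-parameter granularity: zero individual small-magnitude weights.
    thresh = np.quantile(np.abs(w), 1.0 - weight_keep)
    w[np.abs(w) < thresh] = 0.0
    return w

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 4, 3, 3))
pruned = prune_multi_granularity(w)
print(pruned.shape)  # (6, 3, 3, 3): channels and filters physically removed
```

Note that the first two granularities physically shrink the tensor (and hence storage and compute), while the third only introduces sparsity; in a real network, removing a layer's input channels must also be coordinated with the preceding layer's output filters.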


Abstract

The present invention provides a network model compression method based on multi-granularity pruning, which includes one or more of the following three steps: an input-channel-granularity pruning step, which uses an unimportant-element pruning method to prune the unimportant elements at the input-channel granularity level of the network model; a convolution-kernel-granularity pruning step, which uses the unimportant-element pruning method to prune the unimportant elements at the convolution-kernel granularity level of the network model; and a weight-parameter-granularity pruning step, which uses the unimportant-element pruning method to prune the unimportant elements at the weight-parameter granularity level of the network model. Compared with the prior art, the present invention effectively compresses the network model by pruning at multiple granularity levels.

Description

Technical Field

[0001] The invention relates to image processing, video surveillance, and deep neural networks, and in particular to a network model compression method and device based on multi-granularity pruning.

Background Technique

[0002] In recent years, with the rapid development of artificial intelligence, deep learning networks have made breakthroughs in the field of computer vision, because they form high-level features by combining low-level features and are less affected by environmental changes; in particular, they have surpassed human recognition accuracy in face recognition and image classification.

[0003] However, existing high-performance deep learning networks generally have millions or even hundreds of millions of parameters, which makes their storage and computing consumption huge and limits their application on devices with limited storage and computing resources. Therefore, compressing the deep learning network model is a key step in solving this problem. ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/02
CPC: G06N3/02
Inventors: 曾建平王军李志国班华忠朱明张智鹏
Owner: BEIJING ICETECH SCI & TECH CO LTD