
Method and apparatus for quantizing and compressing neural network with adjustable quantization bit width

A method in the field of convolutional neural network quantization and compression. It addresses the performance loss that prior-art quantization and compression cause in convolutional neural networks, and achieves the effects of saving transmission time and reducing memory and storage resource occupation.

Active Publication Date: 2017-12-15
INST OF AUTOMATION CHINESE ACAD OF SCI
Cites: 4 · Cited by: 91
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0005] In order to solve the above-mentioned problem in the prior art, namely the large performance loss incurred when the prior art quantizes and compresses a convolutional neural network, one aspect of the present invention provides a method for quantizing and compressing a convolutional neural network, including:



Examples


Embodiment Construction

[0038] In order to make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described fully below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art, based on the embodiments of the present invention and without creative effort, fall within the protection scope of the present invention.

[0039] As shown in Figure 1, the flow of the convolutional neural network quantization and compression method provided by the present invention includes:

[0040] Step 1: Obtain the initial weight tensor of the convolutional layer of the original convolutional neural network, and the initial input feature tenso...



Abstract

The invention relates to the technical field of neural networks, and specifically provides a method and apparatus for quantizing and compressing a convolutional neural network. It aims to solve the large loss of network performance caused by existing methods for quantizing and compressing a neural network. The method comprises: obtaining a weight tensor and an input feature tensor of an original convolutional neural network; performing fixed-point quantization on the weight tensor and the input feature tensor based on a preset quantization bit width; and replacing the original weight tensor and input feature tensor with the resulting fixed-point weight representation tensor and fixed-point input feature representation tensor, thereby obtaining a new convolutional neural network that is a quantized and compressed version of the original. The method can flexibly adjust the bit width according to different task requirements, and can quantize and compress a convolutional neural network without changing the algorithm or network structure, thereby reducing the occupation of memory and storage resources. The invention further provides a storage apparatus and a processing apparatus with the same beneficial effects.
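The fixed-point quantization step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the patent's actual implementation: it assumes a simple symmetric fixed-point scheme mapping a float tensor to signed integers of the chosen bit width plus a scale factor, and the function names and the choice of scale are illustrative assumptions.

```python
import numpy as np

def fixed_point_quantize(tensor, bit_width):
    """Map a float tensor to a signed fixed-point representation.

    Returns the integer representation tensor and the scale needed to
    recover approximate float values (q * scale).
    """
    qmax = 2 ** (bit_width - 1) - 1          # e.g. 127 for an 8-bit width
    scale = np.max(np.abs(tensor)) / qmax    # largest magnitude maps to qmax
    if scale == 0.0:                          # guard for an all-zero tensor
        scale = 1.0
    q = np.clip(np.round(tensor / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the fixed-point representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
# Stand-in for a convolutional layer's weight tensor.
weights = rng.standard_normal((16, 3, 3, 3)).astype(np.float32)

q8, s8 = fixed_point_quantize(weights, 8)    # wider bit width: finer resolution
q4, s4 = fixed_point_quantize(weights, 4)    # narrower bit width: more compression

err8 = np.abs(dequantize(q8, s8) - weights).mean()
err4 = np.abs(dequantize(q4, s4) - weights).mean()
```

In the described method, the original float tensors would then be replaced by their fixed-point representation tensors, so a 32-bit float weight stored at 8 bits occupies a quarter of the memory; narrowing the bit width trades reconstruction accuracy (err4 exceeds err8 here) for further compression, which is the adjustability the title refers to.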

Description

Technical field

[0001] The invention belongs to the technical field of neural networks, and specifically provides a method and device for quantizing and compressing convolutional neural networks.

Background technique

[0002] In recent years, with the development of convolutional neural networks in the field of target detection and recognition, their detection accuracy has reached a commercial level. At the same time, the rapid development of portable devices (such as mobile terminals and smart devices) has shown researchers opportunities for combining convolutional neural networks with portable devices. However, target recognition based on convolutional neural networks often relies on high-performance GPU (Graphics Processing Unit) equipment, requires a huge amount of computation, and consumes a large amount of memory. If run on a smartphone or an embedded device, the convolutional neural network model will quickly consu...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04; G06N3/08
CPC: G06N3/082; G06N3/045
Inventors: 程健 (Cheng Jian), 贺翔宇 (He Xiangyu), 胡庆浩 (Hu Qinghao)
Owner: INST OF AUTOMATION CHINESE ACAD OF SCI