
Neural network processor based on data compression, design method and chip

A data compression technology for neural networks, applied to biological neural network models, their physical realization, etc. It addresses the problem of accelerating calculation and achieves the effect of improving calculation speed and operating energy efficiency.

Active Publication Date: 2017-02-22
INST OF COMPUTING TECH CHINESE ACAD OF SCI
View PDF · 3 Cites · 157 Cited by

AI Technical Summary

Problems solved by technology

The literature "Albericio J, Judd P, Hetherington T, et al. Cnvlutin: Ineffectual-Neuron-Free Deep Neural Network Computing[C] // Computer Architecture (ISCA), 2016 ACM/IEEE 43rd Annual International Symposium on. IEEE, 2016: 1-13." achieves large-scale parallel computing by providing large-scale on-chip storage units and, on that basis, realizes compression of data elements. However, this method relies on large-scale on-chip storage to meet its parallel computing needs and is therefore not suitable for embedded devices. The literature "Chen Y H, Emer J, Sze V. Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks[J]. 2016." reuses data by sharing data and weights, and uses power gating to shut off the calculation of zero-valued data, which can effectively improve energy efficiency. However, this method can only reduce the power consumption of the operation; it cannot skip zero-valued elements to speed up the calculation.
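The distinction drawn above can be made concrete with a small sketch. This is an illustration of the two generic strategies, not code from either cited architecture or from the patent; the function names, cycle model, and data are assumptions. Gating still spends a cycle on every element (saving only switching energy), while skipping iterates over the nonzero elements only:

```python
# Illustrative sketch: zero-skipping saves *time*, clock-gating saves only *energy*.
# The cycle counts model one element (or one nonzero element) per cycle.

def gated_dot(weights, activations):
    """Gating style: every element is still visited, but the multiply
    is suppressed when the activation is zero (energy saving only)."""
    total, cycles = 0, 0
    for w, a in zip(weights, activations):
        cycles += 1            # the pipeline still spends a cycle per element
        if a != 0:             # gate the multiplier to save switching power
            total += w * a
    return total, cycles

def skipping_dot(weights, activations):
    """Skipping style: zero activations are removed up front, so the
    loop runs only over the nonzero elements (time saving as well)."""
    nonzero = [(w, a) for w, a in zip(weights, activations) if a != 0]
    total, cycles = 0, 0
    for w, a in nonzero:
        cycles += 1            # one cycle per *nonzero* element only
        total += w * a
    return total, cycles

acts = [0, 3, 0, 0, 5, 0, 0, 2]   # typical post-ReLU sparsity
wts  = [1, 2, 3, 4, 5, 6, 7, 8]

print(gated_dot(wts, acts))    # (47, 8): same result, 8 cycles
print(skipping_dot(wts, acts)) # (47, 3): same result, 3 cycles
```

Both variants produce the same dot product; only the cycle count differs, which is the distinction the two cited works illustrate.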




Embodiment Construction

[0036] In researching neural network processors, the inventor found that neural network calculations involve a large number of data elements with a value of 0. Under data operations such as multiplication and addition, such elements have no numerical effect on the calculation results, yet when the neural network processor handles them they occupy a large amount of on-chip storage space, consume redundant transmission resources, and increase running time, making it difficult to meet the performance requirements of the neural network processor.

[0037] After analyzing the calculation structure of existing neural network processors, the inventor found that the data elements of a neural network can be compressed to accelerate operation and reduce energy consumption. The prior art provides the basic framework of a neural network accelerator. The present invention proposes a data comp...
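The idea of compressing away zero-valued data elements before they reach storage or compute can be sketched as follows. This is a minimal illustration under assumed conventions (an index-value pair encoding), not the patent's actual compression scheme:

```python
# Minimal sketch of compressing activation data into (index, value) pairs,
# so that zero elements never occupy storage or transmission bandwidth.
# The encoding chosen here is an assumption for illustration.

def compress(data):
    """Keep only nonzero elements, each paired with its original position."""
    return [(i, v) for i, v in enumerate(data) if v != 0]

def decompress(pairs, length):
    """Rebuild the dense vector, e.g. for verification."""
    out = [0] * length
    for i, v in pairs:
        out[i] = v
    return out

dense = [0, 0, 7, 0, 0, 0, 4, 0]
packed = compress(dense)                      # 2 stored entries instead of 8
assert packed == [(2, 7), (6, 4)]
assert decompress(packed, len(dense)) == dense
```

For typical post-ReLU activation maps, where a large fraction of elements are zero, such an encoding shrinks both the storage footprint and the number of elements the compute units must touch.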



Abstract

The invention provides a neural network processor based on data compression, a design method, and a chip. The processor comprises: at least one storage unit for storing operating instructions and the data participating in calculation; at least one storage unit controller for controlling the storage unit; at least one calculation unit for executing the calculations of a neural network; a control unit, connected with the storage unit controllers and the calculation units, which acquires the instructions held in the storage unit through the storage unit controllers and parses them to control the calculation units; and at least one data compression unit for compressing the data participating in calculation according to a data compression storage format, each data compression unit being connected with its corresponding calculation unit. The occupancy of data resources in the neural network processor is reduced, the operating speed is increased, and energy efficiency is improved.
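The abstract names a "data compression storage format" without disclosing it here. One common family of such formats, shown below purely as a hedged illustration (the field layout and names are assumptions, not the format claimed by the patent), pairs a bitmap of nonzero positions with a packed list of the nonzero values, which lets a compute unit accumulate over only the packed values:

```python
# Hedged sketch of one possible bitmap-based compression storage format:
# a bitmap marks which positions hold nonzero data, and a packed list
# holds only the nonzero values. Illustrative only.

def pack(data):
    """Split a dense vector into (nonzero bitmap, packed nonzero values)."""
    bitmap = [1 if v != 0 else 0 for v in data]
    values = [v for v in data if v != 0]
    return bitmap, values

def sparse_dot(bitmap, values, weights):
    """Multiply-accumulate that reads only the packed nonzero values,
    using the bitmap to select the matching weight for each one."""
    total, vi = 0, 0
    for wi, bit in enumerate(bitmap):
        if bit:
            total += values[vi] * weights[wi]
            vi += 1
    return total

bitmap, values = pack([0, 3, 0, 0, 5, 0, 0, 2])
assert (bitmap, values) == ([0, 1, 0, 0, 1, 0, 0, 1], [3, 5, 2])
assert sparse_dot(bitmap, values, [1, 2, 3, 4, 5, 6, 7, 8]) == 47
```

A bitmap costs one bit per element regardless of sparsity, which is why formats of this kind are attractive in hardware: the decoder is a simple priority scan rather than an index comparison.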

Description

Technical Field

[0001] The invention relates to the field of hardware acceleration for neural network model calculation, and in particular to a data compression-based neural network processor, design method, and chip.

Background Technique

[0002] Deep learning technology has developed rapidly in recent years. Deep neural networks, especially convolutional neural networks, have found wide application in image recognition, speech recognition, natural language understanding, weather prediction, gene expression, content recommendation, and intelligent robots.

[0003] The deep network structure obtained by deep learning is an operational model containing a large number of data nodes; each data node is connected to other data nodes, and the connection relationship between nodes is represented by a weight. As the complexity of neural networks continues to increase, neural network technology faces many problems in practical application, su...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/06
CPC: G06N3/06; Y02D10/00
Inventors: 韩银和, 许浩博, 王颖
Owner: INST OF COMPUTING TECH CHINESE ACAD OF SCI