
Sparse neural network architecture and realization method thereof

A sparse neural network architecture and an implementation method thereof, applied in the field of neural network deep learning. It addresses the large sparsity introduced by activation functions and by compression techniques that reduce the amount of computation, and it achieves a balanced computational load, eliminates invalid calculations, and improves the utilization rate of hardware resources.

Active Publication Date: 2018-01-19
TSINGHUA UNIV

AI Technical Summary

Problems solved by technology

[0004] Second, the application of activation functions introduces a large degree of sparsity into the network.
[0012] Third, currently popular neural network compression algorithms reduce the amount of computation through pruning and quantization, which also introduces sparsity into the network.
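The two sparsity sources described above can be made concrete with a short sketch (a hypothetical illustration, not taken from the patent): the ReLU activation function zeroes out all negative values, and magnitude pruning zeroes out small weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Activation-induced sparsity: ReLU maps every negative pre-activation to zero.
pre_activations = rng.standard_normal((128, 256))
activations = np.maximum(pre_activations, 0.0)             # ReLU
print("activation sparsity:", np.mean(activations == 0))   # roughly 0.5 for zero-mean inputs

# Pruning-induced sparsity: zero out weights below a magnitude threshold.
weights = rng.standard_normal((256, 256))
threshold = np.quantile(np.abs(weights), 0.9)              # keep only the largest 10% by magnitude
pruned_weights = np.where(np.abs(weights) >= threshold, weights, 0.0)
print("weight sparsity:", np.mean(pruned_weights == 0))    # roughly 0.9
```

In both cases the zeros contribute nothing to subsequent multiply-accumulate operations, which is exactly the invalid computation the proposed architecture aims to eliminate.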

Method used




Embodiment Construction

[0048] The following will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without making creative efforts belong to the protection scope of the present invention.

[0049] Figure 2 is a schematic diagram of the sparse neural network architecture according to an embodiment of the present invention. As shown in Figure 2, the architecture includes an external memory controller, a weight cache, an input cache, an output cache, an input cache controller and a computing array.

[0050] The external memory controller is connected to the weight cache, the input cache and the output cache, respectively. The computing array is likewise connected to the input cache, the weight cache and the output cache.
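The Abstract states that the input cache controller performs a sparse operation on the input and removes zero values before they reach the computing array. The following sketch models that step in software; the function name and the (index, value) layout are assumptions for illustration, not the patented hardware interface.

```python
import numpy as np

def compress_input(input_block: np.ndarray):
    """Drop zero inputs and keep (index, value) pairs.

    High-level software model of the input cache controller's sparse
    operation: only non-zero inputs are forwarded, so the computing array
    never multiplies against a zero operand.
    """
    nonzero_idx = np.flatnonzero(input_block)
    return nonzero_idx, input_block[nonzero_idx]

# Example: a ReLU output block in which many entries are zero.
x = np.maximum(np.random.default_rng(1).standard_normal(16), 0.0)
indices, values = compress_input(x)
print(f"forwarded {values.size} of {x.size} inputs")   # zero-valued inputs are eliminated
```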

[0051] The i...



Abstract

The invention discloses a sparse neural network architecture and a realization method thereof. The sparse neural network architecture comprises an external memory controller, a weight cache, an input cache, an output cache, an input cache controller and a computing array, wherein the computing array comprises multiple computing units; each row of reconfigurable computing units in the computing array shares part of the input in the input cache, and each column of reconfigurable computing units shares part of the weights in the weight cache; the input cache controller performs a sparse operation on the input of the input cache, removing zero values from the input; and the external memory controller stores the data of the computing array before and after processing. Through the sparse neural network architecture and the realization method thereof, the invalid computation performed when the input is zero can be reduced or even eliminated, the computational load is balanced among all the computing units, the hardware resource utilization rate is increased, and the shortest computing delay is guaranteed.
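As a rough reading of the Abstract, the sparse operation plus work balancing can be modeled as a matrix-vector product that skips zero inputs and deals the remaining multiply-accumulates evenly across the computing units. The round-robin assignment below is an assumption for illustration; the patent does not specify this exact scheduling.

```python
import numpy as np

def balanced_sparse_matvec(weights: np.ndarray, x: np.ndarray, num_units: int = 4) -> np.ndarray:
    """Toy model of the claimed behaviour: remove zero inputs, then spread
    the remaining work evenly over the computing units (hypothetical
    round-robin scheduling, not the patented design)."""
    nonzero_idx = np.flatnonzero(x)                 # sparse operation: zero inputs removed
    partial = np.zeros((num_units, weights.shape[0]))
    for n, j in enumerate(nonzero_idx):
        unit = n % num_units                        # balance the computed quantities among units
        partial[unit] += weights[:, j] * x[j]       # each unit accumulates its share of columns
    return partial.sum(axis=0)                      # combine partial sums into the final output

x = np.maximum(np.random.default_rng(2).standard_normal(64), 0.0)   # sparse after ReLU
W = np.random.default_rng(3).standard_normal((32, 64))
assert np.allclose(balanced_sparse_matvec(W, x), W @ x)             # matches the dense result
```

Because only non-zero inputs are scheduled, the invalid multiplications by zero never occur, and the round-robin dealing keeps each unit's multiply-accumulate count within one of every other unit's.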

Description

technical field

[0001] The present invention relates to neural network deep learning technology, and in particular to a sparse neural network architecture and its implementation method.

Background technique

[0002] In recent years, excellent hardware architectures for deep learning have emerged. For example, Nvidia dominates the current deep learning market with its massively parallel GPUs and its dedicated GPU programming framework CUDA. More and more companies are developing hardware accelerators for deep learning, such as Google's Tensor Processing Unit (TPU), Intel's Xeon Phi Knights Landing, and Qualcomm's Neural Network Processor (NNU). Teradeep now uses FPGAs (Field Programmable Gate Arrays) because they are 10 times more energy efficient than GPUs; FPGAs are more flexible, more scalable, and offer a higher performance-per-watt ratio. These hardware structures have good performance for dense deep neural networks, but are...

Claims


Application Information

IPC(8): G06N3/063
Inventors: 尹首一 (Shouyi Yin), 李宁 (Ning Li), 欧阳鹏 (Peng Ouyang), 刘雷波 (Leibo Liu), 魏少军 (Shaojun Wei)
Owner: TSINGHUA UNIV