
Sparse GRU neural network acceleration implementation method and device

A neural network implementation method, applied in the field of neural networks, that addresses the limited support for GRU networks in existing accelerators and the difficulty of achieving a high degree of parallelism caused by the GRU's timing dependence, achieving the effects of reduced DSP consumption, reduced memory consumption, and avoidance of memory exhaustion.

Pending Publication Date: 2021-06-04
SHANGHAI UNIV

AI Technical Summary

Problems solved by technology

Researchers usually use GPUs to accelerate GRU neural networks, but because of the GRU's timing dependence (each time step's output depends on the previous hidden state), a high degree of parallelism is difficult to achieve. Most existing FPGA neural network accelerators are designed for convolutional or fully connected networks and offer little support for GRU networks. At the same time, most deep neural networks have many parameters and a large computational load, so accelerating them on a resource-limited FPGA remains a difficult problem.




Embodiment Construction

[0038] The following will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without making creative efforts belong to the protection scope of the present invention.

[0039] As shown in Figure 1, an embodiment of the present invention provides a method for implementing sparse GRU neural network acceleration, comprising the following steps:

[0040] S1. Train the GRU neural network model on a CPU or GPU, then prune and quantize the trained model parameters; store the resulting sparse parameter matrix with the triplet method, and quantize the model input.
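Step S1 can be sketched in host-side software. The snippet below is a minimal illustration, not the patent's actual implementation: it prunes a dense weight matrix by magnitude, stores the surviving weights as (row, column, value) triplets in the spirit of the "triplet method" (i.e. coordinate/COO storage), and quantizes the values to signed 8-bit fixed point. The function name, the quantile-based threshold rule, and the 8-bit width are assumptions.

```python
import numpy as np

def prune_and_store_coo(weights, sparsity=0.9, bits=8):
    """Prune a dense weight matrix by magnitude and return the survivors
    as (row, col, quantized value) triplets plus the dequantization scale."""
    # Magnitude threshold that removes roughly the requested fraction of weights.
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) > threshold
    rows, cols = np.nonzero(mask)
    vals = weights[rows, cols]
    # Symmetric linear quantization: the largest magnitude maps to +/-(2^(bits-1)-1).
    scale = np.max(np.abs(vals)) / (2 ** (bits - 1) - 1)
    q_vals = np.round(vals / scale).astype(np.int8)
    return list(zip(rows.tolist(), cols.tolist(), q_vals.tolist())), scale
```

A dequantized weight is recovered as `value * scale`, so only the integer triplets and one scalar scale per matrix need to be shipped to the accelerator.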

[0041] S2. Use a buffer to transfer the triplets storing the model parameters and the quantized input to the FPGA's external memory.

[0042] S3. Perform the sparse GRU neural network computation in the FPGA and transfer the final result to the external memory.
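Step S2 moves the triplets and quantized inputs into the FPGA's external memory through a buffer. As an illustration only, a host-side routine might serialize the triplets into one contiguous byte buffer suitable for a single burst transfer; the record layout below (a 4-byte count, a 4-byte float scale, then uint16 row, uint16 column, int8 value per nonzero) is entirely an assumption, not the patent's format.

```python
import struct

def pack_triplets(triplets, scale):
    """Serialize (row, col, int8 value) triplets into one contiguous buffer.
    Assumed layout: <uint32 count><float32 scale> then <uint16,uint16,int8> records."""
    buf = bytearray(struct.pack('<If', len(triplets), scale))
    for r, c, v in triplets:
        buf += struct.pack('<HHb', r, c, v)  # 5 bytes per nonzero
    return bytes(buf)
```

Packing everything into one buffer matches the stated goal of reducing the number of data transfers, since the accelerator can fetch the whole sparse matrix with a single DMA-style read.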



Abstract

The invention provides a sparse GRU neural network acceleration implementation method and device. The method comprises the following steps: S1, training a GRU neural network model on a CPU or GPU, pruning and quantizing the trained model parameters, storing the sparse parameter matrix with the triplet method, and quantizing the model input; S2, transferring the triplets storing the model parameters and the quantized input to an external memory of an FPGA using a buffer; and S3, performing the sparse GRU neural network computation in the FPGA and transferring the final result to the external memory. The method and device improve computational efficiency, reduce input transmission time and data transmission frequency, and thereby reduce power consumption and latency.
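The computation in step S3 reduces to sparse matrix–vector products inside the standard GRU gate equations, with weights read directly from the triplets. Below is a minimal software reference model of one time step, not the FPGA implementation; the function names, the parameter dictionary layout, and the particular GRU interpolation convention are assumptions.

```python
import numpy as np

def spmv(triplets, scale, x, n_rows):
    """y = W @ x where W is stored as quantized (row, col, value) triplets."""
    y = np.zeros(n_rows)
    for r, c, v in triplets:
        y[r] += v * scale * x[c]  # dequantize on the fly
    return y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(h_prev, x, params, hidden):
    """One GRU time step with every weight matrix in triplet form.
    params maps gate names 'z', 'r', 'h' to (W_trip, W_scale, U_trip, U_scale, bias)."""
    Wz, sz, Uz, suz, bz = params['z']
    Wr, sr, Ur, sur, br = params['r']
    Wh, sh, Uh, suh, bh = params['h']
    z = sigmoid(spmv(Wz, sz, x, hidden) + spmv(Uz, suz, h_prev, hidden) + bz)
    r = sigmoid(spmv(Wr, sr, x, hidden) + spmv(Ur, sur, h_prev, hidden) + br)
    h_tilde = np.tanh(spmv(Wh, sh, x, hidden) + r * spmv(Uh, suh, h_prev, hidden) + bh)
    return (1.0 - z) * h_prev + z * h_tilde
```

Because the inner loop touches only the stored nonzeros, the arithmetic cost scales with the number of surviving weights rather than the dense matrix size, which is the source of the claimed DSP and memory savings.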

Description

Technical field

[0001] The invention relates to the technical field of neural networks, and in particular to a method and device for accelerating a sparse GRU neural network.

Background technique

[0002] In recent years, the rise of deep learning has continuously driven the development of artificial intelligence. As an important tool of deep learning, the deep neural network secures its fitting ability by increasing the number of model layers and the amount of training data, which brings explosive growth in weight parameters and computation. To improve neural network performance, heterogeneous computing solutions are often adopted. At present, most researchers use GPUs to accelerate deep neural networks: graphics processing units (GPUs) suit computationally intensive tasks and offer high bandwidth and high parallelism, but their power consumption is high. The p...
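The timing dependence discussed in the background is visible in the standard GRU equations: the hidden state $h_t$ cannot be computed before $h_{t-1}$ is available, so time steps cannot run in parallel, and only the matrix–vector products inside each step can be parallelized.

```latex
\begin{aligned}
z_t &= \sigma\!\left(W_z x_t + U_z h_{t-1} + b_z\right) && \text{(update gate)}\\
r_t &= \sigma\!\left(W_r x_t + U_r h_{t-1} + b_r\right) && \text{(reset gate)}\\
\tilde{h}_t &= \tanh\!\left(W_h x_t + U_h \,(r_t \odot h_{t-1}) + b_h\right) && \text{(candidate state)}\\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t
\end{aligned}
```

Pruning the matrices $W_\ast$ and $U_\ast$ and storing them as triplets shrinks exactly these products, which dominate the per-step cost.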

Claims


Application Information

IPC(8): G06N3/08; G06N3/04; G06N3/063
CPC: G06N3/082; G06N3/04; G06N3/063; Y02D10/00
Inventor: 龙湘蒙, 支小莉, 童维勤, 张庆杰
Owner SHANGHAI UNIV