
Multi-computing-unit coarse-grained reconfigurable system and method for recurrent neural network

A technology concerning recurrent neural networks and computing units, applied in the field of embedded reconfigurable systems. It addresses problems such as the increased workload of managing parallel-program processes, the high power consumption of GPU computing, and complex program code.

Active Publication Date: 2017-05-31
NANJING UNIV OF TECH
Cites: 3 · Cited by: 33

AI Technical Summary

Problems solved by technology

GPU computing offers extremely high parallelism, but the program code required for GPU parallel computing is complex, managing parallel processes adds workload, and GPU computing consumes considerable power. FPGAs are flexible and reusable and improve digital-circuit performance, but their power consumption is also high. ASICs have the smallest power consumption and area overhead and the fastest computation, but tape-out cost is high and the time from design to deployment is long.



Examples


Embodiment Construction

[0024] The present invention is further illustrated below with reference to specific embodiments. It should be understood that these embodiments serve only to illustrate the invention and are not intended to limit its scope; after reading this disclosure, those skilled in the art will understand that all equivalent modifications of the invention fall within the scope defined by the appended claims of the present application.

[0025] As shown in figure 1, the multi-computing-unit coarse-grained reconfigurable system oriented to the recurrent neural network LSTM obtains data from external memory through the on-chip shared storage unit; the on-chip configuration-information storage and reconfiguration controller controls the on-chip computing arrays through the configuration bus, and the computing arrays can exchange data with one another through the data-exchange storage unit. The system includes an on-chip shared storage unit, data ...
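The dataflow described in paragraph [0025] can be illustrated with a small software model: a shared storage unit that mirrors external memory, a controller that configures compute arrays over a configuration bus, and a data-exchange storage unit through which arrays pass results. This is a minimal sketch under my own naming; none of the class or method names come from the patent.

```python
# Hypothetical software model of the dataflow in [0025]. All names are
# illustrative, not from the patent.

class SharedStorage:
    """On-chip shared storage: caches blocks fetched from external memory."""
    def __init__(self, external_memory):
        self.external = external_memory
        self.cache = {}

    def load(self, addr):
        if addr not in self.cache:          # fetch from external memory on miss
            self.cache[addr] = self.external[addr]
        return self.cache[addr]

class ExchangeStorage:
    """Data-exchange storage unit: a mailbox between compute arrays."""
    def __init__(self):
        self.slots = {}

    def put(self, key, value):
        self.slots[key] = value

    def get(self, key):
        return self.slots[key]

class ComputeArray:
    """A coarse-grained compute array whose operation is set by configuration."""
    def __init__(self, name, exchange):
        self.name = name
        self.exchange = exchange
        self.op = None

    def configure(self, op):                # stands in for a configuration-bus write
        self.op = op

    def run(self, in_key, out_key):
        result = self.op(self.exchange.get(in_key))
        self.exchange.put(out_key, result)

# The reconfiguration controller's role: configure the arrays, then fire them.
external = {0: [1.0, 2.0, 3.0]}
shared = SharedStorage(external)
exchange = ExchangeStorage()
exchange.put("in", shared.load(0))          # data enters via shared storage

scale = ComputeArray("scale", exchange)
shift = ComputeArray("shift", exchange)
scale.configure(lambda v: [2.0 * x for x in v])
shift.configure(lambda v: [x + 1.0 for x in v])

scale.run("in", "mid")    # first array writes its result to exchange storage
shift.run("mid", "out")   # second array consumes it, as in [0025]
print(exchange.get("out"))  # [3.0, 5.0, 7.0]
```

The point of the exchange-storage mailbox is that arrays never address each other directly; the controller only wires up which slot each array reads and writes, which is what makes the fabric reconfigurable.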



Abstract

The invention discloses a multi-computing-unit coarse-grained reconfigurable system and method for a recurrent neural network LSTM (long short-term memory). The system comprises matrix-product-and-bias calculation arrays, activation-fitting calculation arrays, and vector calculation arrays.

The matrix-product-and-bias calculation arrays perform the calculation and accumulation of the multiple matrix-vector products in the recurrent neural network and, under the control of control signals, add the bias values; output values are emitted through the corresponding output cache units.

The activation-fitting calculation arrays implement piecewise-linear fitting of the activation functions in the LSTM: when input values enter the input cache units, the control signals direct the activation-fitting calculation units to perform the corresponding piecewise-linear fitting calculation, and output values are emitted through the corresponding output cache units.

The vector calculation arrays perform dimension-wise vector multiplication and vector addition; after the multiplication units finish, the data are passed to the vector addition units or output directly, under the control of the control signals.

The reconfigurable system improves parallelism, calculation speed, and array utilization.
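The three array types in the abstract map directly onto the standard LSTM cell equations: matrix-vector products with bias, activation evaluation, and element-wise vector multiply/add. Below is a plain-Python sketch of one LSTM step organized into those three stages. The LSTM equations are the textbook ones; the sizes, weights, and function names are my own illustration, and the activations are computed exactly here (the patent instead fits them piecewise-linearly).

```python
# One LSTM step split into the three stages the abstract names.
import math

def matvec_bias(W, x, U, h, b):
    """Stage 1: accumulate two matrix-vector products and add a bias."""
    return [sum(Wr[j] * x[j] for j in range(len(x))) +
            sum(Ur[j] * h[j] for j in range(len(h))) + br
            for Wr, Ur, br in zip(W, U, b)]

def sigmoid(v):
    """Stage 2: activation (exact here; the patent uses piecewise-linear fits)."""
    return [1.0 / (1.0 + math.exp(-z)) for z in v]

def tanh(v):
    return [math.tanh(z) for z in v]

def vmul(a, b):
    """Stage 3: dimension-wise vector multiplication."""
    return [x * y for x, y in zip(a, b)]

def vadd(a, b):
    """Stage 3: dimension-wise vector addition."""
    return [x + y for x, y in zip(a, b)]

def lstm_step(x, h, c, params):
    Wi, Ui, bi, Wf, Uf, bf, Wo, Uo, bo, Wg, Ug, bg = params
    i = sigmoid(matvec_bias(Wi, x, Ui, h, bi))   # input gate
    f = sigmoid(matvec_bias(Wf, x, Uf, h, bf))   # forget gate
    o = sigmoid(matvec_bias(Wo, x, Uo, h, bo))   # output gate
    g = tanh(matvec_bias(Wg, x, Ug, h, bg))      # candidate cell state
    c_new = vadd(vmul(f, c), vmul(i, g))         # cell update: f*c + i*g
    h_new = vmul(o, tanh(c_new))                 # hidden state: o * tanh(c)
    return h_new, c_new

# Tiny 1-dimensional example: all weights 0.5, all biases 0.
params = ([[0.5]], [[0.5]], [0.0]) * 4           # (W, U, b) for each of 4 gates
h, c = lstm_step([1.0], [0.0], [0.0], params)    # c ≈ [0.2877], h ≈ [0.1743]
```

Each stage only needs its predecessor's vector output, which is why the three array types can be chained through output cache units as the abstract describes.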

Description

Technical Field

[0001] The invention relates to a multi-computing-unit coarse-grained reconfigurable system and method oriented to the recurrent neural network LSTM, belonging to the field of embedded reconfigurable systems.

Background

[0002] In recent years, with the development of microelectronics and computer technology, and especially the emergence of large-scale high-performance programmable devices and the improvement of hardware/software design methods and tools, real-time circuit reconfiguration has gradually become a new hotspot in international computing-system research. Its appearance has blurred the traditional boundary between hardware and software and made hardware systems "soft". The essence of real-time circuit reconfiguration is to exploit the ability of programmable devices to have their logic state repeatedly configured, dynamically changing the circuit structure of the system accordi...
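The abstract states that the activation functions are evaluated by piecewise-linear fitting, which replaces the exponential with one multiply and one add per input. Here is a generic sketch of that technique; the segment count, input range, and saturation handling are my own illustrative choices, not the patent's coefficients.

```python
# Generic piecewise-linear fit of an activation function. Segment
# boundaries, count, and clamping are illustrative assumptions.
import math

def build_segments(f, lo, hi, n):
    """Precompute (start, slope, intercept) for n equal-width segments."""
    width = (hi - lo) / n
    segs = []
    for k in range(n):
        x0, x1 = lo + k * width, lo + (k + 1) * width
        slope = (f(x1) - f(x0)) / width          # chord slope over the segment
        segs.append((x0, slope, f(x0) - slope * x0))
    return segs

def pwl_eval(segs, lo, hi, x, f_lo, f_hi):
    """Evaluate the fit; clamp to saturation values outside [lo, hi]."""
    if x <= lo:
        return f_lo
    if x >= hi:
        return f_hi
    width = (hi - lo) / len(segs)
    idx = min(int((x - lo) / width), len(segs) - 1)
    x0, slope, intercept = segs[idx]
    return slope * x + intercept                 # one multiply + one add

sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
segs = build_segments(sigmoid, -6.0, 6.0, 16)

# Maximum error of a 16-segment fit over a sample grid: small enough that
# cheap linear hardware can stand in for the exact exponential.
err = max(abs(pwl_eval(segs, -6.0, 6.0, z, 0.0, 1.0) - sigmoid(z))
          for z in [x / 10.0 for x in range(-80, 81)])
print(round(err, 4))
```

In hardware, the lookup of `(slope, intercept)` by segment index is a small table read, so each activation costs one table access, one multiplication, and one addition regardless of the function being approximated.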

Claims


Application Information

IPC(8): G06F9/38, G06N3/06
CPC: G06F9/383, G06N3/06
Inventors: 王琛, 徐新艳
Owner NANJING UNIV OF TECH