
High-bandwidth memory-based neural network calculation apparatus and method

A high-bandwidth-memory and neural-network technology, applied to biological neural network models, computation, instruments, and the like. It addresses problems such as high power consumption, the difficulty of reducing area, and the resulting obstacle to continued performance growth of neural network computing devices, with the effect of improving computing performance and increasing data transmission bandwidth and transmission speed.

Active Publication Date: 2018-07-03
SHANGHAI CAMBRICON INFORMATION TECH CO LTD

AI Technical Summary

Problems solved by technology

However, in neural network computing devices, where bandwidth, performance, power consumption and area must all be considered, GDDR4 and GDDR5 can no longer fully meet requirements, and their development has entered a bottleneck period. Each additional 1 GB/s of bandwidth brings more power consumption, which is neither a wise, efficient, nor cost-effective choice for designers or consumers. At the same time, GDDR4 and GDDR5 still suffer from the serious problem that their area is difficult to reduce. Therefore, GDDR4 and GDDR5 will gradually hinder the continued growth of neural network computing device performance.



Examples


Embodiment Construction

[0030] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be described in further detail below in conjunction with specific embodiments and with reference to the accompanying drawings.

[0031] As a new type of low-power memory chip, High-Bandwidth Memory (HBM) has the excellent characteristics of an ultra-wide communication data path, low power consumption and small area. An embodiment of the present invention proposes a neural network computing device based on high-bandwidth memory; see Figure 1. The neural network computing device includes: a package substrate 101 (Package Substrate), an interposer 102 (Interposer), a logic chip 103 (Logic Die), a high-bandwidth memory 104 (Stacked Memory) and a neural network accelerator 105. Among these:
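The ultra-wide data path mentioned above is the key to HBM's bandwidth advantage. As an illustrative sketch (using the commonly cited figures of a 1024-bit interface at roughly 1 Gbit/s per pin for first-generation HBM, versus a 32-bit interface at roughly 7 Gbit/s per pin for a GDDR5 chip — these numbers are not taken from the patent itself), peak bandwidth can be estimated as bus width times per-pin data rate:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) x per-pin rate (Gbit/s) / 8 bits per byte."""
    return bus_width_bits * pin_rate_gbps / 8

# First-generation HBM: 1024-bit stacked interface, ~1 Gbit/s per pin (illustrative).
hbm_stack = peak_bandwidth_gb_s(1024, 1.0)

# GDDR5: 32-bit interface per chip, ~7 Gbit/s per pin (illustrative).
gddr5_chip = peak_bandwidth_gb_s(32, 7.0)

print(hbm_stack)   # GB/s per HBM stack
print(gddr5_chip)  # GB/s per GDDR5 chip
```

Even at a much lower per-pin rate, the stacked 1024-bit interface yields roughly 128 GB/s per stack, against about 28 GB/s for a single GDDR5 chip, which is the bandwidth advantage the embodiment relies on.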

[0032] The packaging substrate 101 is used to carry the above-mentioned other components of the neural network computing device, and to be electrically connected to upper de...



Abstract

The invention provides a high-bandwidth memory-based neural network calculation apparatus and method. The neural network calculation apparatus comprises at least one high-bandwidth memory and a neural network accelerator, wherein each high-bandwidth memory comprises multiple memory dies accumulated in a stacked manner, and the neural network accelerator is electrically connected with the high-bandwidth memory, performs data exchange with it, and executes neural network calculations. According to the apparatus and the method, the storage bandwidth can be greatly increased; with the high-bandwidth memory serving as the memory of the neural network calculation apparatus, input data and calculation parameters can be exchanged between the buffer and the memory more quickly, greatly shortening IO time. Because the high-bandwidth memory adopts a stacked structure, it does not occupy additional lateral planar space, so the area of the neural network calculation apparatus can be greatly reduced — to about 5% of the area in the prior art — and its power consumption is likewise reduced.
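The shortened IO time claimed above follows directly from the higher bandwidth: for a fixed payload, transfer time scales inversely with bandwidth. A minimal sketch (the 256 MB payload and both bandwidth figures are illustrative assumptions, not values stated in the patent):

```python
def io_time_ms(payload_mb: float, bandwidth_gb_s: float) -> float:
    """Time in milliseconds to move a payload between external memory and an on-chip buffer."""
    return payload_mb / 1024 / bandwidth_gb_s * 1000

# Hypothetical 256 MB of input data and calculation parameters.
t_gddr5 = io_time_ms(256, 28.0)    # one GDDR5 chip at ~28 GB/s (assumed)
t_hbm   = io_time_ms(256, 128.0)   # one HBM stack at ~128 GB/s (assumed)

print(t_gddr5, t_hbm)  # HBM cuts the transfer time by the bandwidth ratio
```

Under these assumptions the same transfer drops from roughly 8.9 ms to below 2 ms, illustrating how the stacked memory shortens the IO portion of each layer's computation.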

Description

Technical field

[0001] The invention relates to the application of high-performance storage in the field of neural network computing, in particular to a high-bandwidth memory-based neural network computing device and method.

Background technique

[0002] At present, the field of artificial intelligence is developing rapidly, and machine learning is affecting all aspects of people's lives. As an important part of machine learning, research on neural networks is a hot spot in both industry and academia. Due to the huge amount of data in neural network calculations, how to accelerate the execution of neural network algorithms has become an important problem to solve. Therefore, dedicated neural network computing devices came into being.

[0003] Most of the DRAMs used in the architecture of neural network computing devices at the present stage are GDDR4 or GDDR5. However, in neural network computing devices, due to the consideration of band...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F7/575; G06N3/063
CPC: G06F7/575; G06N3/063
Inventors: 陈天石, 李韦, 郭崎, 陈云霁
Owner SHANGHAI CAMBRICON INFORMATION TECH CO LTD