
Neural network acceleration device and method and communication equipment

A neural network acceleration technology in the field of data processing. It addresses the problems of wasted on-chip storage resources, excessive off-chip read frequency, and ineffective data reuse, and achieves reduced memory access, improved computing speed, and lower cache requirements.

Pending Publication Date: 2021-12-17
Applicant: 绍兴埃瓦科技有限公司 +1

AI Technical Summary

Problems solved by technology

Traditional convolution acceleration devices use the img2col method to expand the input feature map data and the convolution kernel data into matrix form according to the kernel size and stride parameters, and then operate on the expanded matrices so that the convolution can be accelerated using matrix multiplication rules. However, this approach requires a larger on-chip cache once the feature data has been expanded, demands more frequent reads from off-chip main memory, and cannot efficiently reuse the data it has already read; it therefore occupies off-chip memory read/write bandwidth and increases hardware power consumption. In addition, a convolution acceleration scheme based on img2col expansion is not conducive to implementing the hardware logic circuits needed for convolutions with different kernel sizes and strides. During convolutional network computation, each input channel must perform convolution matrix operations with multiple convolution kernels, so the feature map data has to be fetched multiple times, and all feature map data of each channel is cached in the buffer. Not only is this amount of data huge, but because the size of the feature data after matrix conversion is far larger than the size of the original feature data, on-chip storage resources are wasted and large-scale operations cannot be performed.
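
For illustration, here is a minimal Python sketch (not taken from the patent) of im2col-based convolution. It shows how the expanded matrix replicates each input element up to K×K times, which is the on-chip cache and bandwidth overhead described above; the function and variable names are assumptions for the example.

```python
# Minimal im2col (img2col) convolution sketch, illustration only.
# For a KxK kernel with stride 1, every input pixel is replicated up to
# K*K times in the expanded matrix, inflating the buffer that must be cached.
import numpy as np

def im2col_conv(x, w, stride=1):
    """Convolution via im2col expansion followed by a matrix multiply."""
    H, W = x.shape
    K = w.shape[0]
    out_h = (H - K) // stride + 1
    out_w = (W - K) // stride + 1
    # Expanded matrix: one row per output position, K*K columns.
    cols = np.empty((out_h * out_w, K * K), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i * stride:i * stride + K, j * stride:j * stride + K]
            cols[i * out_w + j] = patch.ravel()
    # One matrix-vector product performs all sliding-window dot products.
    out = cols @ w.ravel()
    return out.reshape(out_h, out_w), cols.size

x = np.random.rand(64, 64).astype(np.float32)
w = np.random.rand(3, 3).astype(np.float32)
out, expanded_elems = im2col_conv(x, w)
print(f"original feature elements: {x.size}, im2col elements: {expanded_elems}")
# For a 3x3 kernel the expanded buffer approaches 9x the original feature map,
# which is the storage and bandwidth overhead the patent aims to avoid.
```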


Detailed Description of Embodiments

[0026] Embodiments of the present application will be described in detail below in conjunction with the accompanying drawings.

[0027] The embodiments of the present application are described below through specific examples, and those skilled in the art can readily understand other advantages and effects of the present application from the content disclosed in this specification. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. The present application may also be implemented or applied through other specific embodiments, and various modifications or changes may be made to the details in this specification based on different viewpoints and applications without departing from the spirit of the present application. It should be noted that, where there is no conflict, the following embodiments and the features in the embodiments may be combined with each other. Based on the embodiments in this application, all...



Abstract

The invention provides a neural network acceleration device and method and communication equipment, belonging to the field of data processing. In the method, a main memory receives and stores feature map data and weight data of an image to be processed; a main controller generates configuration information and an operation instruction according to the structural parameters of the neural network; a data caching module comprises a feature data caching unit for caching feature line data extracted from the feature map data and a convolution kernel caching unit for caching convolution kernel data extracted from the weight data; a data controller adjusts the data path according to the configuration information and the instruction information and controls the data stream extracted by a data extractor to be loaded into the corresponding neural network calculation unit; the neural network calculation unit completes at least the convolution of one convolution kernel with the feature map data and completes the accumulation of multiple convolution results in at least one cycle, thereby realizing circuit reconfiguration and data multiplexing; and an accumulator accumulates the convolution results and outputs the output feature map data corresponding to each convolution kernel.
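
As a rough illustration of the row-wise dataflow the abstract describes (feature-line cache, kernel cache, compute unit, accumulator), here is a minimal Python sketch. It is not the patented circuit; the single-channel, stride-1 simplification and all names are assumptions for the example.

```python
# Row-wise (line-buffered) convolution sketch, illustration only: a feature-line
# cache holds just K rows of the input at a time, a kernel cache holds one
# convolution kernel, a compute step performs per-row multiply-accumulates, and
# the results are accumulated into the output feature map.
from collections import deque
import numpy as np

def line_buffered_conv(feature_map, kernel):
    H, W = feature_map.shape
    K = kernel.shape[0]
    out_h, out_w = H - K + 1, W - K + 1
    out = np.zeros((out_h, out_w), dtype=feature_map.dtype)

    line_cache = deque(maxlen=K)        # feature data caching unit: K rows only
    kernel_cache = kernel.copy()        # convolution kernel caching unit

    for row_idx in range(H):            # feature rows stream in from main memory
        line_cache.append(feature_map[row_idx])
        if len(line_cache) < K:
            continue                    # not enough rows cached yet
        out_row = row_idx - K + 1
        # compute unit + accumulator: sum the K row-wise partial convolutions
        for k, cached_row in enumerate(line_cache):
            for j in range(out_w):      # slide the kernel row along the cached row
                out[out_row, j] += np.dot(cached_row[j:j + K], kernel_cache[k])
    return out

fm = np.arange(36, dtype=np.float32).reshape(6, 6)
kw = np.ones((3, 3), dtype=np.float32)
print(line_buffered_conv(fm, kw))       # matches a direct 3x3 convolution
```

Only K feature rows need to be resident at any time, which is the reduced cache requirement relative to buffering a whole channel or an im2col-expanded matrix.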

Description

Technical field [0001] The invention relates to the field of data processing, in particular to a neural network acceleration device, method and communication equipment. Background technique [0002] A convolutional neural network consists of an input layer, any number of hidden layers as intermediate layers, and an output layer. The input layer has multiple input nodes (neurons). The output layer has output nodes (neurons) corresponding to the number of objects to be recognized. [0003] The convolution kernel is a small window in the hidden layer that holds the weight parameters. The convolution kernel slides over the input image sequentially according to the stride and performs multiply-accumulate operations with the input feature map of the corresponding area; that is, the weight parameters in the convolution kernel are first multiplied element-wise with the corresponding input values and then summed. The traditiona...
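
A short sketch (illustration only, not from the patent) of the sliding-window multiply-accumulate described in paragraph [0003]; the helper name and stride handling are assumptions for the example.

```python
# One output value of a convolution: multiply the kernel element-wise with the
# input window at the current sliding position, then sum the products.
import numpy as np

def conv_output_at(x, w, i, j, stride=1):
    """Value of the output feature map at position (i, j) for kernel w over input x."""
    K = w.shape[0]
    window = x[i * stride:i * stride + K, j * stride:j * stride + K]
    return float(np.sum(window * w))    # multiply corresponding elements, then sum

x = np.arange(25, dtype=np.float32).reshape(5, 5)
w = np.ones((3, 3), dtype=np.float32)
print(conv_output_at(x, w, 1, 1, stride=2))   # window x[2:5, 2:5] summed = 162.0
```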


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N 3/063; G06N 3/04
CPC: G06N 3/063; G06N 3/045; Y02D 10/00
Inventors: 王赟, 张官兴, 郭蔚, 黄康莹, 张铁亮
Owner: 绍兴埃瓦科技有限公司