
A Pipelined Acceleration System of FPGA-Based Deep Convolutional Neural Network

A neural network and deep convolution technology, applied in the field of neural network computing, that addresses the large data volume and great model depth of deep convolutional neural networks and the constraints of real-time applications with limited investment costs.

Active Publication Date: 2019-09-20
武汉魅瞳科技有限公司
Cites: 0


Problems solved by technology

[0003] Deep convolutional neural network models are characterized by great model depth, complex hierarchies, large data volumes, high parallelism, and intensive computation and storage. The large number of convolution and pooling operations often becomes the main computational bottleneck in applications, and the storage of many intermediate results places higher demands on the computer's memory architecture. This is very unfavorable for application scenarios with strong real-time requirements and limited investment budgets.
[0004] The two commonly used accelerators are the CPU and the GPU. Owing to its serial execution architecture, the CPU cannot ideally meet the computing-performance requirements. Although the GPU has clear advantages in computing performance, it, like the CPU, cannot break through the power-consumption barrier, and both the CPU and the GPU are seriously limited in scalability.



Examples


Embodiment Construction

[0059] The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it.

[0060] The deep convolutional neural network model of this specific embodiment has the following characteristics:

[0061] (1) In all calculation layers (the initial input image layer, the convolutional layers, the pooling layers, and the fully connected layers), every individual feature map has the same length and width, and the calculation windows of all calculation layers likewise share the same length and width.

[0062] (2) The calculation layers are connected in the following order: initial input image layer, convolutional layer 1, pooling layer 1, convolutional layer 2, pooling layer 2, convolutional layer 3, pooling layer 3, fully connected layer 1, and fully connected layer 2.

[0063] (3) Th...
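The layer ordering in [0062] and the equal window sizes in [0061] can be sketched as a short size-propagation calculation. All concrete dimensions below (input side length, window sizes, padding) are hypothetical illustrations; the patent text fixes only the ordering and the equal-window constraint, not these numbers.

```python
# Minimal sketch of the layer ordering described in [0062].
# The sizes here are assumed for illustration only.

def conv_out(size, k, stride=1, pad=0):
    """Output side length of a square convolution."""
    return (size + 2 * pad - k) // stride + 1

def pool_out(size, k):
    """Output side length of a non-overlapping pooling window."""
    return size // k

size = 32  # hypothetical input image side length
k = 3      # hypothetical convolution window side (same for all layers, per [0061])
p = 2      # hypothetical pooling window side

for layer in ["conv1", "pool1", "conv2", "pool2", "conv3", "pool3"]:
    if layer.startswith("conv"):
        size = conv_out(size, k, pad=k // 2)  # "same" padding keeps the side length
    else:
        size = pool_out(size, p)
    print(layer, "->", size, "x", size)
# Fully connected layers 1 and 2 then operate on the flattened final feature maps.
```

With these assumed numbers, the spatial size halves after each pooling layer (32 → 16 → 8 → 4), which is the usual shape progression for such a conv/pool stack.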



Abstract

The invention proposes a pipelined acceleration system for an FPGA-based deep convolutional neural network. The system is mainly composed of an input data distribution control module, an output data distribution control module, a convolution calculation sequence serialization module, a convolution calculation module, a pooling calculation sequence serialization module, a pooling calculation module, and a convolution calculation result distribution control module, and further comprises an internal system cascade interface. The system designed by the invention enables highly efficient parallel pipelined implementation on an FPGA, effectively solves the resource waste and computation delays caused by padding operations during calculation, effectively reduces system power consumption, and greatly increases processing speed.
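To make the dataflow concrete, the module chain named in the abstract can be modeled in software as a chain of streaming stages, each consuming the previous stage's output as it arrives. This is only a hedged toy model: the module names come from the abstract, but the 1-D data, the kernel weights, and the window sizes are invented for illustration and do not reflect the actual FPGA design.

```python
# Toy software model of the pipelined dataflow named in the abstract.
# Each generator imitates one streaming hardware stage; all concrete
# values (1-D input, kernel, window sizes) are illustrative assumptions.

def input_distribution(pixels):
    # input data distribution control module: feeds the stream
    for p in pixels:
        yield p

def conv_serialize(stream, window=3):
    # convolution calculation sequence serialization: forms sliding windows
    buf = []
    for v in stream:
        buf.append(v)
        if len(buf) == window:
            yield tuple(buf)
            buf.pop(0)

def conv_compute(windows, weights=(1, 0, -1)):
    # convolution calculation module: dot product with a toy 1-D kernel
    for w in windows:
        yield sum(a * b for a, b in zip(w, weights))

def pool_compute(stream, window=2):
    # pooling calculation module: non-overlapping max pooling
    buf = []
    for v in stream:
        buf.append(v)
        if len(buf) == window:
            yield max(buf)
            buf = []

stage1 = input_distribution(range(8))
stage2 = conv_serialize(stage1)
stage3 = conv_compute(stage2)
out = list(pool_compute(stage3))  # out == [-2, -2, -2] for this toy input
```

Because each stage yields results as soon as its window fills, later stages start working before earlier stages finish, which is the essential property the FPGA pipeline exploits.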

Description

Technical Field

[0001] The invention belongs to the field of neural network computation, and in particular relates to a pipelined acceleration system for an FPGA-based deep convolutional neural network.

Background Technology

[0002] With the new wave of machine learning brought about by deep learning, deep convolutional neural networks have been widely applied to large-scale machine learning problems such as speech recognition, image recognition, and natural language processing, and have achieved a series of breakthrough research results. Their powerful feature learning and classification abilities have attracted widespread attention and hold important analytical and research value.

[0003] The deep convolutional neural network model is characterized by great model depth, complex hierarchies, large data volumes, high parallelism, and intensive computation and storage, and the large number of convolution and pooling operations often make it a pa...
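The computational intensity claimed in [0003] is easy to quantify with a back-of-envelope multiply-accumulate (MAC) count for a single convolutional layer. The layer shape below is a hypothetical example chosen for illustration; it is not taken from the patent.

```python
# Back-of-envelope illustration of why convolutions dominate the cost
# described in [0003]. The layer shape is a hypothetical example.

def conv_macs(out_h, out_w, out_ch, in_ch, k):
    """Multiply-accumulates for a standard convolution with a k x k window:
    every output element needs in_ch * k * k MACs."""
    return out_h * out_w * out_ch * in_ch * k * k

macs = conv_macs(out_h=112, out_w=112, out_ch=64, in_ch=3, k=7)
print(f"{macs:,} MACs for this single layer")  # 118,013,952
```

Roughly 118 million MACs for one early layer of one input image, before any of the deeper (and typically wider) layers, which is why serial execution on a CPU becomes a bottleneck as [0004] argues.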

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06N 3/063; G06N 3/08
Inventors: 李开, 邹复好, 章国良, 黄浩, 杨帆, 孙浩
Owner: 武汉魅瞳科技有限公司