
FPGA-based deep convolution neural network realizing method

A deep convolutional neural network implementation method, applicable to biological neural network models, physical implementation, speech analysis, etc. It addresses the high power consumption and low efficiency of existing approaches, achieving high efficiency and low power consumption and improving algorithm efficiency.

Active Publication Date: 2016-12-14
FUDAN UNIV
Cites: 9 · Cited by: 159

AI Technical Summary

Problems solved by technology

[0015] The purpose of the present invention is to provide a method for implementing a deep convolutional neural network model with high efficiency and low power consumption, so as to solve the problems of high power consumption and low efficiency in current GPU- or CPU-based deep learning models.

Method used



Detailed Description of the Embodiments

[0073] The method of the present invention is explained below in conjunction with the accompanying drawings, using a handwritten character recognition algorithm implemented with a deep convolutional neural network model on an FPGA hardware platform as the specific embodiment. (The deep convolutional neural network model consists of the input layer I, the first convolutional layer C1, the first downsampling layer S1, the second convolutional layer C2, the second downsampling layer S2, and the fully connected Softmax layer. The input image size is 28×28, the first convolutional layer contains one convolution kernel of size 5×5, and the second convolutional layer contains three convolution kernels of size 5×5.)
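The layer structure described above can be sketched as a minimal NumPy forward pass. This is an illustrative software model only, not the patent's FPGA implementation: the layer shapes follow the embodiment (28×28 input, one 5×5 kernel in C1, three 5×5 kernels in C2, 2×2 downsampling, Softmax output), while the random weights, tanh activation, mean pooling, and 10-class output are assumptions for the sketch.

```python
import numpy as np

def conv2d_valid(img, k):
    # 2-D "valid" convolution (cross-correlation, as is conventional in CNNs)
    H, W = img.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def downsample2x2(fm):
    # 2x2 downsampling layer (mean pooling assumed here)
    H, W = fm.shape
    return fm.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
img = rng.random((28, 28))             # input layer I: 28x28 image
k1 = rng.standard_normal((5, 5))       # C1: one 5x5 kernel
k2s = rng.standard_normal((3, 5, 5))   # C2: three 5x5 kernels

c1 = np.tanh(conv2d_valid(img, k1))    # C1 -> 24x24 feature map
s1 = downsample2x2(c1)                 # S1 -> 12x12
c2 = [np.tanh(conv2d_valid(s1, k)) for k in k2s]   # C2 -> three 8x8 maps
s2 = [downsample2x2(f) for f in c2]                # S2 -> three 4x4 maps
feat = np.concatenate([f.ravel() for f in s2])     # 48-dim feature vector
W_fc = rng.standard_normal((10, feat.size))        # Softmax layer weights
probs = softmax(W_fc @ feat)           # class probabilities
```

Tracing the shapes confirms the pipeline: 28×28 → 24×24 (C1) → 12×12 (S1) → 8×8 (C2) → 4×4 (S2), giving a 3 × 16 = 48-element feature vector for the final classification layer.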

[0074] The specific operation steps of the handwritten character recognition algorithm implemented on the FPGA using the deep convolutional neural network model are as shown in Figure 1.

[0075] 1. Load the trained model parameters

[0076] First...



Abstract

The invention belongs to the technical field of digital image processing and pattern recognition, and specifically relates to an FPGA-based deep convolutional neural network implementation method. The hardware platform for the method is a Xilinx ZYNQ-7030 all-programmable SoC, which integrates an FPGA and an ARM Cortex-A9 processor. Trained network model parameters are loaded to the FPGA side; preprocessing of the input data is performed on the ARM side, and the result is transmitted to the FPGA side. The convolution and downsampling computations of the deep convolutional neural network are carried out on the FPGA side to form data feature vectors, which are transmitted back to the ARM side to complete the feature classification computation. The fast parallel processing and extremely low-power, high-performance computing characteristics of the FPGA are exploited to realize the convolution computation, which has the highest complexity in a deep convolutional neural network model. Algorithm efficiency is greatly improved and power consumption is reduced while the algorithm's accuracy is maintained.
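The ARM/FPGA work split described in the abstract can be sketched as three pipeline stages. This is a simulation in Python for illustration only: the function names are hypothetical, the "FPGA" stage is ordinary software standing in for the hardware convolution/downsampling engine, and the single-kernel path and random weights are assumptions, not the patent's design.

```python
import numpy as np

def arm_preprocess(raw):
    # ARM Cortex-A9 side: scale raw 8-bit pixels to floats in [0, 1]
    return raw.astype(np.float64) / 255.0

def fpga_conv_downsample(x, kernel):
    # FPGA side (simulated): one 5x5 convolution followed by 2x2 mean pooling.
    # On the real device the multiply-accumulates run in parallel in hardware.
    H, W = x.shape
    conv = np.zeros((H - 4, W - 4))
    for i in range(conv.shape[0]):
        for j in range(conv.shape[1]):
            conv[i, j] = np.sum(x[i:i + 5, j:j + 5] * kernel)
    h, w = conv.shape
    return conv.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def arm_classify(feat, w_fc):
    # ARM side: fully connected softmax over the returned feature vector
    z = w_fc @ feat.ravel()
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
raw = rng.integers(0, 256, (28, 28))                # camera/input image
x = arm_preprocess(raw)                             # ARM -> FPGA transfer
fm = fpga_conv_downsample(x, rng.standard_normal((5, 5)))   # 12x12 feature map
probs = arm_classify(fm, rng.standard_normal((10, fm.size)))  # FPGA -> ARM
```

The point of the partition is that the FPGA stage carries the O(H·W·k²) multiply-accumulate load, while the ARM handles the lighter, control-heavy preprocessing and classification steps.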

Description

Technical Field

[0001] The invention belongs to the technical fields of digital image processing and pattern recognition, and in particular relates to a method for implementing a deep convolutional neural network model on an FPGA hardware platform.

Background

[0002] With the current rapid development of computer technology and the Internet, the scale of data is growing explosively, and the intelligent analysis and processing of massive data has become the key to effectively exploiting the value of data. Artificial intelligence technology is an effective means of discovering valuable information in massive data. In recent years, breakthroughs have been made in computer vision, speech recognition, natural language processing, and other application fields; deep learning algorithm models based on deep convolutional neural networks are a typical representative.

[0003] Convolutional Neural Networks (CNNs) are inspired by neuroscience research. After more than 20 year...

Claims


Application Information

IPC(8): G06N 3/063; G06K 9/62; G10L 15/16
CPC: G06N 3/063; G10L 15/16; G06F 18/24
Inventors: Wang Zhanxiong (王展雄), Zhou Guangzhen (周光朕), Feng Rui (冯瑞)
Owner FUDAN UNIV