
General miniaturization method of deep neural network

A deep neural network and small-scale network technology, applied to biological neural network models, neural learning methods, neural architectures, etc. It addresses problems such as performance degradation, achieves good stability and effectiveness, reduces network computation, and reduces required storage space.

Status: Inactive
Publication Date: 2018-03-02
睿魔创新科技(深圳)有限公司 +1
Cites: 0 · Cited by: 6

AI Technical Summary

Problems solved by technology

[0006] 3. Compact network: designing a more compact network structure amounts to reducing the number of parameters in each layer of the network. However, even with a large amount of training data, once the model shrinks beyond a certain point this approach also causes performance degradation.




Embodiment Construction

[0019] In order to further clarify the features, technical means, and the specific objectives and effects achieved by the present invention, the invention is described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0020] The present invention discloses a general miniaturization method of a deep neural network, which can be widely applied to deep neural network models deployed on embedded and mobile computing platforms, specifically comprising the following steps:

[0021] Perform feature reconstruction on the initial deep neural network to form a new small network. The initial large, complex network is reconstructed, and this reconstruction is used to compress the deep neural network, so that the original deep neural network is effectively simplified.
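One way to picture this step: fit a smaller replacement block so that it reproduces the feature maps of the original block on a small calibration set. The sketch below is only an illustration of that reading; the helper name reconstruct_block, the calib_loader iterator, and the MSE reconstruction loss are assumptions, not the procedure specified in the patent.

```python
# Illustrative sketch only: one reading of "feature reconstruction", in which a
# smaller block is trained to reproduce the original block's output features.
# reconstruct_block, calib_loader and the MSE loss are assumptions, not the
# patent's specified procedure.
import torch
import torch.nn as nn

def reconstruct_block(original_block: nn.Module,
                      small_block: nn.Module,
                      calib_loader,            # yields input batches (tensors)
                      steps: int = 200,
                      lr: float = 1e-3) -> nn.Module:
    """Fit small_block so its output matches original_block's feature maps.

    Assumes small_block produces outputs of the same shape as original_block
    (e.g. it ends with a 1x1 conv restoring the original channel count).
    """
    optimizer = torch.optim.Adam(small_block.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    original_block.eval()
    for _, x in zip(range(steps), calib_loader):
        with torch.no_grad():
            target = original_block(x)           # features of the original network
        loss = loss_fn(small_block(x), target)   # feature-reconstruction error
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return small_block
```

This mirrors common feature-reconstruction practice in network compression, where the compressed block keeps the original output width (for example by ending in a 1x1 convolution) so the two feature maps can be compared directly.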

[0022] The number of feature channels and the number of group convolutions of the deep neural network are compressed, reducing the number of output feature channels and the number of group convolutions.
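A minimal sketch of this step, assuming the compression is applied per convolutional layer by scaling its output-channel count and its group count with a predetermined ratio. The function compress_conv, the rounding, and the divisibility fallback are illustrative assumptions; the patent text does not give exact formulas.

```python
# Hypothetical helper: shrink one Conv2d's output channels and group count by a
# predetermined compression ratio. Rounding and the divisibility fallback are
# assumptions; the patent does not specify exact formulas.
import torch.nn as nn

def compress_conv(conv: nn.Conv2d, ratio: float) -> nn.Conv2d:
    out_ch = max(1, int(conv.out_channels * ratio))  # fewer output feature channels
    groups = max(1, int(conv.groups * ratio))        # fewer group convolutions
    # groups must divide both channel counts; fall back to 1 otherwise
    if conv.in_channels % groups or out_ch % groups:
        groups = 1
    return nn.Conv2d(conv.in_channels, out_ch,
                     kernel_size=conv.kernel_size,
                     stride=conv.stride,
                     padding=conv.padding,
                     groups=groups,
                     bias=conv.bias is not None)
```

For instance, with a ratio of 0.5 a 3x3 layer with 256 output channels and 8 groups would shrink to 128 output channels and 4 groups, provided the input width remains divisible by the new group count.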



Abstract

The invention discloses a general miniaturization method of a deep neural network. The method comprises the following steps: performing feature reconstruction on an initial deep neural network to form a new small network; and compressing the number of feature channels and the number of group convolutions of the deep neural network to reduce the number of output feature channels and the number of group convolutions. The feature reconstruction uses a feature layering approach: a predetermined compression ratio is set, the low-level features of the deep neural network are reconstructed first, and the high-level features are then reconstructed at the same predetermined compression ratio. The general miniaturization method reduces storage space, simplifies the network, maintains good stability and effectiveness, and is widely applicable to deep neural network models deployed on embedded and mobile computing platforms.
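To show how the layered scheme in the abstract might be wired together, the sketch below walks a plain sequential stack from low-level (early) layers to high-level (late) layers and compresses each one with the same predetermined ratio. Restricting the model to an nn.Sequential of Conv2d layers and propagating the reduced width to the next layer are simplifying assumptions; batch-norm and fully connected layers would need matching adjustments, and each compressed layer could additionally be fitted by feature reconstruction as sketched earlier.

```python
# Illustrative sketch of the layered ordering described in the abstract:
# compress low-level (early) layers first, then high-level (late) layers,
# all at the same predetermined ratio. Assumes a plain nn.Sequential of
# Conv2d layers; other layer types are left untouched.
import torch.nn as nn

def miniaturize(model: nn.Sequential, ratio: float) -> nn.Sequential:
    layers = list(model)
    prev_out = None                                # width of the previous compressed conv
    for i, layer in enumerate(layers):
        if not isinstance(layer, nn.Conv2d):
            continue
        in_ch = prev_out if prev_out is not None else layer.in_channels
        out_ch = max(1, int(layer.out_channels * ratio))   # compressed feature channels
        groups = max(1, int(layer.groups * ratio))         # compressed group count
        if in_ch % groups or out_ch % groups:
            groups = 1                             # keep the layer valid
        layers[i] = nn.Conv2d(in_ch, out_ch, layer.kernel_size,
                              stride=layer.stride, padding=layer.padding,
                              groups=groups, bias=layer.bias is not None)
        prev_out = out_ch
    return nn.Sequential(*layers)
```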

Description

Technical field

[0001] The invention belongs to the technical field of deep learning, and in particular relates to a general miniaturization method of a deep neural network.

Background technique

[0002] In recent years, the rapid development of deep learning has brought leapfrog progress in the performance of algorithms in a series of fields such as computer vision and natural language processing. Deep learning algorithms have been widely used in academia, but they have not yet been widely adopted in industry. One of the reasons is that deep learning models are huge and computationally expensive: the weight file of a convolutional neural network is often hundreds of megabytes. In a research environment, a large GPU cluster can be used to speed up training and reduce running time, but for an actual product, a footprint of hundreds of megabytes is unacceptable. At present, the performance of smartphones is already very super...


Application Information

IPC(8): G06N3/04, G06N3/08
CPC: G06N3/082, G06N3/045
Inventors: 董健, 张明, 黄龙, 王禹
Owner: 睿魔创新科技(深圳)有限公司