
Convolutional neural network architecture search method based on differentiable sampler and progressive learning

A convolutional neural network architecture search method based on a differentiable sampler and progressive learning. It addresses the problem that prior schemes achieve only a limited reduction of the supernetwork's discretization error because they still optimize the supernetwork's loss directly; by instead searching for an optimal probability distribution function, the method reduces the discretization error.

Pending Publication Date: 2022-03-25
GUANGDONG UNIV OF TECH

AI Technical Summary

Problems solved by technology

Although that scheme adds an entropy term, its optimization objective remains the supernetwork's loss on the validation set; its architecture parameters act only as fixed attention weights over the supernetwork's neural operators, and the optimization process contains no sampling step. Its effect on reducing the supernetwork's discretization error is therefore limited, and the existing technology still has certain limitations.



Examples


Embodiment 1

[0046] Please refer to Figure 1. A convolutional neural network architecture search method based on a differentiable sampler and progressive learning includes the following steps:

[0047] S1. Construct a hypernetwork, the hypernetwork's architecture parameters, and a differentiable sampler;

[0048] S2. In a progressive-learning manner, use the differentiable sampler, guided by the architecture parameters, to perform sampling optimization on the hypernetwork and obtain a desired network;

[0049] S3. Retrain the desired network until it converges.

[0050] Compared with the prior art, the present invention uses a differentiable sampler to directly sample and optimize the constructed hypernetwork. This changes the optimization goal of the architecture search from optimizing the supernetwork to finding the optimal probability distribution function, minimizing the expectation of the loss function of the network under t...
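The patent text does not spell out the sampler's exact form. A common choice for a differentiable sampler in architecture search is the Gumbel-softmax relaxation, which draws a relaxed one-hot sample over candidate operators while staying differentiable in the architecture parameters. A minimal NumPy sketch under that assumption (all names hypothetical, not from the patent):

```python
import numpy as np

def gumbel_softmax_sample(alpha, tau=0.5, rng=None):
    """Draw a relaxed one-hot sample over candidate operators from
    architecture logits `alpha`; differentiable in `alpha`."""
    rng = rng or np.random.default_rng(0)
    # Gumbel(0, 1) noise makes categorical sampling reparameterizable.
    g = -np.log(-np.log(rng.uniform(size=alpha.shape)))
    y = (alpha + g) / tau
    e = np.exp(y - y.max())          # numerically stable softmax
    return e / e.sum()

# One edge with 8 candidate operators and uniform architecture logits.
alpha = np.zeros(8)
probs = gumbel_softmax_sample(alpha)
```

Lower temperatures `tau` push the sample toward a hard one-hot choice of a single operator, which is what shrinks the gap between the relaxed supernetwork and the discretized sub-network.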

Embodiment 2

[0093] This embodiment elaborates on Embodiment 1 with specific parameter settings and more concrete examples, wherein:

[0094] In this embodiment, an architecture search is performed on the CIFAR-10 dataset. Specifically, CIFAR-10 contains 60,000 images, of which 50,000 form the training set and 10,000 the validation set. The image resolution is 32×32 with 3 channels.

[0095] For the CIFAR-10 dataset: in this embodiment, the number of channels of the three-stage hypernetwork is predefined as [9, 13, 19]. ,5,3].

[0096] The candidate neural operators include: 3×3 depthwise separable convolution, 5×5 depthwise separable convolution, 3×3 dilated convolution, 5×5 dilated convolution, 3×3 max pooling, 3×3 average pooling, residual (skip) connection, and the zero operation.

[0097] The dropout rates of the residual connections in the three stages are [0, 0.4, 0.7], respectively.
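The operator set and the stage-wise residual-connection dropout above can be captured as a small configuration. A sketch mirroring the quoted settings (operator implementations omitted; the identifiers are illustrative, not from the patent):

```python
# Candidate operator set for each edge of the hypernetwork cell.
CANDIDATE_OPS = [
    "sep_conv_3x3", "sep_conv_5x5",   # depthwise separable convolutions
    "dil_conv_3x3", "dil_conv_5x5",   # dilated convolutions
    "max_pool_3x3", "avg_pool_3x3",   # pooling layers
    "skip_connect", "none",           # residual connection, zero operation
]

# Dropout applied to residual connections, per search stage; raising it
# in later stages discourages the search from collapsing onto skips.
RESIDUAL_DROPOUT = [0.0, 0.4, 0.7]

def residual_dropout_for_stage(stage):
    """Return the skip-connection dropout rate for a 0-indexed stage."""
    return RESIDUAL_DROPOUT[stage]
```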

[0098] The f...

Embodiment 3

[0107] A convolutional neural network architecture search system based on a differentiable sampler and progressive learning (see Figure 6) includes a building module 1, a sampling optimization module 2, and a retraining module 3. The sampling optimization module 2 is connected to the building module 1 and the retraining module 3, wherein:

[0108] The building module 1 is used to construct the hypernetwork, the hypernetwork's architecture parameters, and the differentiable sampler;

[0109] The sampling optimization module 2 is used to perform, in a progressive-learning manner and according to the architecture parameters, sampling optimization on the hypernetwork with the differentiable sampler to obtain a desired network;

[0110] The retraining module 3 is used to retrain the desired network until it converges.
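The three-module pipeline can be sketched as a skeleton showing how module 2 connects modules 1 and 3. This is purely schematic (class and method names are assumptions, and the bodies are stubs, not the patented algorithm):

```python
class BuildingModule:
    """Module 1: builds the hypernetwork, its architecture
    parameters, and the differentiable sampler (step S1)."""
    def build(self):
        return {"hypernetwork": "supernet",
                "alpha": [0.0] * 8,        # one logit per candidate op
                "sampler": "differentiable_sampler"}

class SamplingOptimizationModule:
    """Module 2: progressively sample-optimizes the hypernetwork (S2)."""
    def optimize(self, built, stages=3):
        for stage in range(stages):
            pass   # sample sub-networks via the sampler, update alpha
        return "desired_network"

class RetrainingModule:
    """Module 3: retrains the derived network to convergence (S3)."""
    def retrain(self, network):
        return f"{network}_trained"

# Pipeline: module 2 sits between modules 1 and 3.
built = BuildingModule().build()
net = SamplingOptimizationModule().optimize(built)
result = RetrainingModule().retrain(net)
```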



Abstract

Aiming at the limitations of the prior art, the invention provides a convolutional neural network architecture search method based on a differentiable sampler and progressive learning. The method directly performs sampling optimization on a constructed supernetwork using the differentiable sampler, converting the optimization target of the architecture search from optimizing the supernetwork to searching for an optimal probability distribution function and minimizing the expectation of a sub-network's loss function under that distribution. The method optimizes the sub-network's probability distribution function by evaluating sub-network performance, thereby reducing discretization errors. Meanwhile, because a progressive learning strategy is adopted, the search remains stable in an exponentially growing search space, so a more complex convolutional neural network architecture can be obtained.
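The objective described above — minimizing the expected sub-network loss under a probability distribution rather than the supernetwork's own loss — can be illustrated with a toy Monte Carlo estimate. The loss function here is a stand-in (a real system would train and evaluate each sampled sub-network); the point is only that concentrating probability mass on good operators lowers the expectation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sub_network_loss(op_index):
    """Hypothetical validation loss of the sub-network that uses
    candidate operator `op_index`; operator 2 is 'best' here."""
    return (op_index - 2) ** 2 / 10.0

def expected_loss(alpha, n_samples=2000):
    """Monte Carlo estimate of E[loss] under the categorical
    distribution defined by architecture logits `alpha`."""
    p = softmax(np.asarray(alpha, dtype=float))
    ops = rng.choice(len(p), size=n_samples, p=p)
    return float(np.mean([sub_network_loss(o) for o in ops]))

alpha = np.zeros(5)                            # uniform over 5 operators
peaked = np.array([-5., -5., 5., -5., -5.])    # mass on the best operator
# Concentrating the distribution on the best operator reduces E[loss].
```

Optimizing the distribution's parameters (the architecture logits) toward such a minimum is what lets the final discretized architecture match what was actually optimized, shrinking the discretization error.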

Description

Technical Field

[0001] The present invention relates to the technical field of artificial intelligence and deep learning, in particular to neural architecture search technology, and more specifically to a convolutional neural network architecture search method based on a differentiable sampler and progressive learning.

Background

[0002] In recent years, deep learning has developed rapidly. Using the powerful feature-representation capabilities of neural networks, deep learning has made breakthroughs in image recognition and other multi-modal tasks. Among them, the architecture design of a neural network plays a crucial role in its performance. However, as deep learning has developed, the number of layers in neural networks has continued to deepen and neural operators have become more and more complex, resulting in increasingly complex network structures and more and more hyperparameters that must be manually tuned, wh...

Claims


Application Information

Patent Timeline
no application
IPC (IPC8): G06N3/04; G06N3/08; G06F17/13; G06K9/62; G06V10/764; G06V10/82
CPC: G06N3/08; G06F17/13; G06N3/047; G06N3/045; G06F18/2415
Inventors: 刘德荣, 饶煊, 王永华, 赵博, 李佳鑫
Owner GUANGDONG UNIV OF TECH