
Augmented neural network configuration, training method therefor, and computer readable storage medium

A neural network architecture and training method in the field of deep learning. It addresses the problem that retraining an already-trained convolutional neural network on new samples changes its weights and degrades its original recognition ability, while achieving good learning ability and recognition ability.

Publication Date: 2017-10-17 (Inactive)
深圳市丰巨泰科电子有限公司

AI Technical Summary

Problems solved by technology

For a trained convolutional neural network that is required to recognize new samples, continuing to train the original network on those new samples will very likely change its weights, making it less able to recognize the original samples.



Examples


Embodiment 1

[0018] This embodiment provides an augmented convolutional neural network architecture and a training method thereof. The augmented convolutional neural network architecture includes a first convolutional neural network model CNN1 and a second convolutional neural network model CNN2 connected in parallel.
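As a rough sketch only (the patent ties the architecture to no framework, so PyTorch and the name AugmentedNet are assumptions), the parallel arrangement of CNN1 and CNN2 could look like this:

```python
import torch.nn as nn

class AugmentedNet(nn.Module):
    """Hypothetical sketch of the augmented architecture: a pre-trained CNN1
    and an untrained CNN2 share the same input and run in parallel."""
    def __init__(self, cnn1: nn.Module, cnn2: nn.Module):
        super().__init__()
        self.cnn1 = cnn1  # first model, already trained on the original samples
        self.cnn2 = cnn2  # second model, to be trained on the new samples
        for p in self.cnn1.parameters():
            p.requires_grad = False  # keep CNN1's weights from changing

    def forward(self, x):
        return self.cnn1(x), self.cnn2(x)  # first result, second result
```

Freezing CNN1's parameters reflects the stated goal that training on new samples should not alter the weights of the original network.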

[0019] The first convolutional neural network model CNN1 is a neural network architecture that has already been trained on the original samples. As shown in figure 1, x0 is a sample in the original sample set; inputting it into the first convolutional neural network model CNN1 yields the ideal output result y0. Therefore, the first convolutional neural network model CNN1 can reliably recognize images belonging to the types in the original sample set.

[0020] For example, the first convolutional neural network model CNN1 can be AlexNet, which can recognize more than 1,000 object categories in the original sample set ImageNet. To increase recognition of more...
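If CNN1 is taken to be the publicly available AlexNet, a minimal instantiation might look as follows (torchvision and the choice of 10 new categories are assumptions made purely for illustration):

```python
from torchvision import models
import torch.nn as nn

# Assumption: torchvision's pre-trained AlexNet (1,000 ImageNet categories)
# plays the role of the first model CNN1.
cnn1 = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# CNN2: the same topology but with untrained weights, its last layer resized
# to a hypothetical number of new categories the augmented network should learn.
cnn2 = models.alexnet(weights=None)
cnn2.classifier[6] = nn.Linear(4096, 10)  # 10 new classes, purely illustrative
```

The two models could then be placed side by side as in the sketch after paragraph [0018].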

Embodiment 2

[0032] The difference between this embodiment and Embodiment 1 is that in Embodiment 1 the first convolutional neural network model CNN1 is a single convolutional neural network, whereas this embodiment replaces CNN1 with a first convolutional neural network model architecture consisting of two or more convolutional neural networks connected in parallel. For example, the first convolutional neural network model architecture of this embodiment includes a first attached convolutional neural network model CNN1' and a second attached convolutional neural network model CNN1''. CNN1' can recognize more than 1,000 object categories in the original sample set ImageNet, and CNN1'' can recognize more than 3,000 types of daily objects in the original sample set ImageNet. In ...
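A minimal sketch of this parallel first-model architecture, assuming (since the excerpt is truncated) that the outputs of CNN1' and CNN1'' are simply concatenated:

```python
import torch
import torch.nn as nn

class FirstModelArchitecture(nn.Module):
    """Hypothetical sketch of Embodiment 2: CNN1' and CNN1'' run in parallel
    and together stand in for the first model. Concatenating their outputs is
    an assumption; the excerpt does not say how they are combined."""
    def __init__(self, cnn1_prime: nn.Module, cnn1_double_prime: nn.Module):
        super().__init__()
        self.cnn1_prime = cnn1_prime                # e.g. ~1,000 ImageNet categories
        self.cnn1_double_prime = cnn1_double_prime  # e.g. ~3,000 daily-object categories

    def forward(self, x):
        return torch.cat([self.cnn1_prime(x), self.cnn1_double_prime(x)], dim=1)
```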



Abstract

The present invention provides an augmented neural network architecture, a training method therefor, and a computer-readable storage medium. The augmented neural network architecture includes a first neural network model and a second neural network model: the first neural network model has already been trained with samples, the second neural network model has not been trained with samples, and the input terminals of the first and second neural network models are connected. The training method includes the following process: a new sample is input, and the first neural network model and the second neural network model output a first result and a second result, respectively, based on the new sample; the expected result minus the first result is used as the target value of the second neural network model; and the second neural network model is trained based on that target value. The present invention not only guarantees the ability to effectively recognize both the original samples and the new samples, but also expands the range of samples that the augmented neural network architecture as a whole can recognize.
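The training process the abstract describes (the expected result minus the first result becomes the second model's target) might be sketched as follows; the loss function and optimizer are assumptions, since the abstract names neither:

```python
import torch

def train_step(cnn1, cnn2, x_new, y_expected, optimizer, loss_fn=torch.nn.MSELoss()):
    """One update on a new sample; only the second model's weights change.
    Assumes the expected result and both model outputs lie in the same
    vector space, so the residual y_expected - y1 is well defined."""
    with torch.no_grad():
        y1 = cnn1(x_new)          # first result (first model is kept fixed)
    target = y_expected - y1      # expected result minus first result
    y2 = cnn2(x_new)              # second result
    loss = loss_fn(y2, target)    # drive the second result toward the target value
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time the overall output would presumably be the sum of the two results, which is consistent with using the residual as the second model's training target.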

Description

Technical field

[0001] The present invention relates to the field of deep learning (Deep Learning), and in particular to an augmented neural network architecture, a training method therefor, and a computer-readable storage medium.

Background technique

[0002] In deep learning research, the prior art has produced many well-trained convolutional neural network (Convolutional Neural Network, CNN) models, such as LeNet, AlexNet, VGGNet, GoogleNet and ResNet, as well as feedforward neural network (FNN, feed forward neural network) models. These models typically require large numbers of training samples and a great deal of time for repeated training, optimization and debugging before they are perfected.

[0003] For example, each convolution layer of a convolutional neural network is computed by multiplying a convolution kernel element-wise with the matrix elements at the corresponding positions of all (or some) of the previous layer's output matrices (also called feature maps), accumulating the products, adding a bias, and finally ...
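The per-element convolution computation described in paragraph [0003] (multiply the kernel against the corresponding elements of the previous layer's feature maps, accumulate, add a bias) can be written out roughly as below; numpy, stride 1 and no padding are assumptions made purely for illustration:

```python
import numpy as np

def conv_output_element(feature_maps, kernels, bias, row, col):
    """One element of one output feature map: elementwise multiply the kernels
    with the matching windows of all input feature maps, sum the products,
    then add the bias (activation omitted)."""
    kh, kw = kernels.shape[1], kernels.shape[2]
    window = feature_maps[:, row:row + kh, col:col + kw]  # shape (C, kh, kw)
    return np.sum(window * kernels) + bias

# Illustration: 2 input feature maps of size 5x5, 3x3 kernels, a single bias.
x = np.random.rand(2, 5, 5)
k = np.random.rand(2, 3, 3)
print(conv_output_element(x, k, bias=0.1, row=0, col=0))
```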

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/08
CPC: G06N3/08
Inventors: 赵勇, 向函, 符祖峰, 谢锋, 陈胜红
Owner: 深圳市丰巨泰科电子有限公司