
A CNN iterative training method of an artificial intelligence framework

An artificial-intelligence training-method technology, applied in the field of video recognition, that solves the problems of complex iterative training and a low degree of automation in existing CNN training, achieving a simple principle and a high degree of automation.

Inactive Publication Date: 2019-02-12
成都网阔信息技术股份有限公司

AI Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to provide a CNN iterative training method for an artificial intelligence framework, solving the problems that the existing CNN network training stage has a low degree of automation and that traditional iterative training is relatively complicated.



Examples


Embodiment 1

[0033] A CNN iterative training method for an artificial intelligence framework comprises a sample set and is characterized by comprising the following steps:

[0034] S1: construct the CNN network structure. The CNN network structure is the training network, which comprises three convolutional layers, two pooling layers and one output layer, wherein:

[0035] The first layer is a convolutional layer using the ReLU activation function; the convolution is performed with edge padding;

[0036] The second layer is a pooling layer, which pools by maximum value;

[0037] The third layer is a convolutional layer using the ReLU activation function; the convolution is performed with edge padding;

[0038] The fourth layer is a convolutional layer using the ReLU activation function; the convolution is performed with edge padding;

[0039] The fifth layer is a pooling layer using the ReLU activation function;

...
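The five-layer structure above (convolution, pooling, convolution, convolution, pooling, followed by an output layer) can be sketched in NumPy. This is a minimal single-channel sketch, not the patent's implementation: the kernels and output weights (`k1`, `k2`, `k3`, `w_out`) and their shapes are assumptions, "convolution with edges" is read as zero-padded "same" convolution, and the pooling is taken to be non-overlapping 2x2 max pooling.

```python
import numpy as np

def conv2d_same(x, k):
    """2-D 'same' convolution: zero edge padding keeps the spatial size."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))  # zero-pad the edges
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    """ReLU activation used by the convolutional layers."""
    return np.maximum(x, 0.0)

def max_pool(x, s=2):
    """Non-overlapping s x s pooling by maximum value."""
    H, W = x.shape
    x = x[:H - H % s, :W - W % s]
    return x.reshape(H // s, s, W // s, s).max(axis=(1, 3))

def forward(img, k1, k2, k3, w_out):
    """Forward pass through the described layer order."""
    h = max_pool(relu(conv2d_same(img, k1)))  # layer 1 (conv+ReLU), layer 2 (max pool)
    h = relu(conv2d_same(h, k2))              # layer 3 (conv+ReLU)
    h = relu(conv2d_same(h, k3))              # layer 4 (conv+ReLU)
    h = max_pool(h)                           # layer 5 (pool)
    return h.ravel() @ w_out                  # output layer (raw scores)
```

An 8x8 input shrinks to 4x4 after the first pooling and to 2x2 after the second, so `w_out` here would map the 4 flattened values to the class scores.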

Embodiment 2

[0046] This embodiment differs from Embodiment 1 in that the sample set is a video image sample set, obtained by intercepting key frames from video samples.
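As a sketch of the key-frame interception step: the patent does not specify how key frames are chosen, so the hypothetical helper below simply picks evenly spaced frame indices; in practice the frames themselves would then be read from the video with a library such as OpenCV.

```python
def keyframe_indices(total_frames, n_keyframes):
    """Return evenly spaced frame indices to intercept as key frames.

    Assumption: 'key frames' are sampled uniformly over the video; the
    patent leaves the selection rule unspecified.
    """
    if n_keyframes >= total_frames:
        return list(range(total_frames))
    step = total_frames / n_keyframes
    return [int(i * step) for i in range(n_keyframes)]
```

For example, a ~60 s clip at 25 fps has about 1500 frames, from which five key frames would be taken at indices 0, 300, 600, 900 and 1200.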

[0047] Further, the method also includes classifying the video image sample set: the sample images are divided into two categories, images that contain objects and images that do not.

[0048] Further, the method also includes extracting a test set and a training set from the video sample set according to a preset size.
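The extraction of a test set and a training set "according to a preset size" could be sketched as a seeded shuffled split; reading the preset size as a test-set ratio is an assumption made here for illustration.

```python
import random

def split_samples(samples, test_ratio=0.2, seed=0):
    """Shuffle the sample list and split it into (training set, test set).

    Assumption: the 'preset size' is a fraction of the data reserved for
    testing; a fixed seed keeps the split reproducible.
    """
    rng = random.Random(seed)
    items = list(samples)
    rng.shuffle(items)
    n_test = int(len(items) * test_ratio)
    return items[n_test:], items[:n_test]
```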

[0049] Further, step S2 also includes a method of transforming the images in the training set. The image transformations include horizontal translation, vertical translation, rotation, changing image contrast, changing brightness, setting the range and degree of blurring, and adjusting the amount of noise, with the amount of each type of transformation controlled in detail. The image transformati...
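A few of the listed transformations (horizontal translation, contrast/brightness change, additive noise) can be sketched in NumPy; the function names and parameters are illustrative, not the patent's, and images are assumed to be single-channel arrays with values in [0, 255].

```python
import numpy as np

def shift_horizontal(img, dx):
    """Horizontal translation by dx pixels; vacated columns are zero-filled."""
    out = np.zeros_like(img)
    if dx >= 0:
        out[:, dx:] = img[:, :img.shape[1] - dx]
    else:
        out[:, :dx] = img[:, -dx:]
    return out

def adjust_contrast_brightness(img, alpha=1.0, beta=0.0):
    """Linear adjustment y = alpha*x + beta, clipped to the [0, 255] range."""
    return np.clip(alpha * img.astype(float) + beta, 0, 255)

def add_noise(img, sigma, rng):
    """Additive Gaussian noise with standard deviation sigma (an assumed model)."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255)
```

Vertical translation, rotation and blurring would follow the same pattern, each with its own amount parameter so the degree of every transformation can be controlled individually, as the paragraph above requires.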

Embodiment 3

[0052] As shown in Figures 1 and 2, the specific steps of the detailed training process for blurred-video recognition are as follows:

[0053] First, obtain vehicle monitoring video samples. The sample data in this embodiment are generated from the in-vehicle video of passenger vehicles and trucks on a third-party monitoring platform, and the resolution of the acquired monitoring video is generally 352*288. Of course, the samples can also be obtained in other ways, and the resolution is not limited to 352*288. Key frames are then intercepted from the videos. After a large number of vehicle surveillance videos has been obtained, key-frame interception is performed on them. A video is composed of many frames of images, so analyzing the degree of blurring of a video can, to a certain extent, be replaced by analyzing the degree of blurring of a single frame. This method uses the OpenCV graphics library for image capture. The length of each selected video is about 60 s, and each video...
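The paragraph above reduces the blur analysis of a video to the blur analysis of a single frame, but does not give the blur metric. A common choice, used here purely as an assumption, is the variance of the Laplacian response: a sharp frame has strong second-derivative responses at edges, while a blurry frame yields a low variance.

```python
import numpy as np

# Standard 3x3 Laplacian kernel (second-derivative edge detector).
LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def blur_score(gray):
    """Variance of the Laplacian response over a grayscale frame.

    Lower scores suggest a blurrier frame; this metric is an assumed
    stand-in for the unspecified blur analysis in the patent.
    """
    H, W = gray.shape
    resp = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            resp[i, j] = np.sum(gray[i:i + 3, j:j + 3] * LAPLACIAN)
    return resp.var()
```

A perfectly flat frame scores exactly zero, and the score grows with edge content; in practice a threshold on this score would separate sharp frames from blurred ones.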



Abstract

The invention discloses a CNN iterative training method for an artificial intelligence framework, which may also be called "on-hook" (unattended) training because it realizes intelligent training. The CNN training framework adopts a dynamic learning-rate algorithm and an automatic convergence-detection algorithm. As the training rounds proceed, the learning rate is adjusted dynamically according to the gradient changes in the back-propagation algorithm and is gradually reduced to a preset value. If the gradient change remains below a threshold value for a certain time, the system stops training and marks training as complete. After training, the training data set can be conveniently expanded by classifying an unknown sample set with the network file obtained by the test program and then correcting the results with a small amount of manual rectification, after which iterative training can be carried out again. Finally, the classification accuracy of the network can reach 99.8%.
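The dynamic learning rate and automatic convergence detection described in the abstract can be sketched as a plain-Python gradient-descent loop. The decay schedule, thresholds and patience window below are assumed values for illustration, not the patent's; the structure follows the abstract: the learning rate decays toward a preset floor, and training stops once the gradient change stays below a threshold for a while.

```python
def train(grad_fn, w, lr=0.1, lr_min=0.01, decay=0.99,
          tol=1e-6, patience=10, max_rounds=10000):
    """Gradient descent with a decaying learning rate and auto-stop.

    grad_fn: returns the gradient at parameter w (scalar here for brevity).
    Training is marked complete when the gradient change stays below
    `tol` for `patience` consecutive rounds.
    """
    prev_g, still, rounds = None, 0, 0
    for rounds in range(1, max_rounds + 1):
        g = grad_fn(w)
        w = w - lr * g
        if prev_g is not None and abs(g - prev_g) < tol:
            still += 1
            if still >= patience:       # convergence detected:
                break                   # stop and mark training complete
        else:
            still = 0
        lr = max(lr * decay, lr_min)    # gradually reduce lr to the preset floor
        prev_g = g
    return w, rounds

# Usage sketch: minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
w_final, n_rounds = train(lambda w: 2.0 * (w - 3.0), 0.0)
```

The loop stops on its own well before `max_rounds` once the gradient barely changes between rounds, which is the "on-hook" behavior the abstract describes.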

Description

technical field [0001] The invention relates to the field of video recognition, and in particular to a CNN iterative training method for an artificial intelligence framework. Background technique [0002] A Convolutional Neural Network (CNN) is a feedforward neural network. Its artificial neurons respond to surrounding units within part of the coverage area, and it performs excellently on large-scale image processing. It includes convolutional layers and pooling layers. [0003] Generally, the basic structure of a CNN includes two kinds of layers. One is the feature-extraction layer: the input of each neuron is connected to the local receptive field of the previous layer, and local features are extracted; once a local feature is extracted, its positional relationship with other features is also determined. The other is the feature-map layer: each computing layer of the network is composed of multiple feature maps, and each feature map is a plane. All neurons on the...

Claims


Application Information

IPC(8): G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/084; G06N3/045; G06F18/214
Inventor 刘宏基
Owner 成都网阔信息技术股份有限公司