
Data feature-based deep neural network self-training method

A deep neural network and data feature technology, applied to neural learning methods, biological neural network models, and similar fields, which solves problems such as a lack of wide applicability and achieves the effects of avoiding manual model adjustment, high test accuracy, and high prediction accuracy.

Inactive Publication Date: 2017-05-31
UNIV OF ELECTRONICS SCI & TECH OF CHINA +1

AI Technical Summary

Problems solved by technology

However, most of the literature provides recommended parameter settings only for its own network structures, and these settings do not have wide applicability.



Examples


Embodiment

[0042] To verify the feasibility and effectiveness of this method, the following experiment is used as a concrete illustration:

[0043] Taking two-dimensional data as an example, an image classification problem is designed for this method: given several known image classification sample libraries, the method is used to build a neural network model, and the classification accuracy of that model is then tested.

[0044] A number of different sample libraries are selected as the trainer's sample set, including the ORL Faces face library and several sample libraries from the UCI Machine Learning Repository. The candidate parameters are: a network depth (number of hidden layers) of 1, 2, or 3; a number of convolution kernels of 6, 10, 12, 16, 20, 32, or 64 (each layer having more kernels than the previous layer); convolution kernel sizes of 3x3, 5x5, or 7x7; gradient methods of stochastic gradient descent, Momentum, or Adam; and an initial training step size of ...
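As a minimal sketch of this candidate space (the dictionary layout, the names, and the grid-search enumeration below are illustrative assumptions, not part of the patent), the settings in paragraph [0044] could be written as:

```python
import itertools

# Candidate hyperparameter values listed in paragraph [0044]; the dict/grid
# structure and key names are illustrative assumptions, not from the patent.
candidate_params = {
    "depth": [1, 2, 3],                          # number of hidden layers
    "num_kernels": [6, 10, 12, 16, 20, 32, 64],  # later layers get more kernels
    "kernel_size": [3, 5, 7],                    # 3x3, 5x5, 7x7 convolutions
    "optimizer": ["sgd", "momentum", "adam"],    # gradient methods
}

def enumerate_configs(params):
    """Yield every combination of candidate settings (a plain grid search)."""
    keys = list(params)
    for values in itertools.product(*(params[k] for k in keys)):
        yield dict(zip(keys, values))

# For example, count how many configurations the trainer would have to evaluate.
print(sum(1 for _ in enumerate_configs(candidate_params)))
```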



Abstract

The invention discloses a data feature-based deep neural network self-training method. The method comprises the following steps: extracting corresponding standard features from prepared, different sample sets, wherein the standard features represent the data distribution characteristics of the sample sets and can be used to uniquely distinguish the different sample sets; inputting the known sample sets and their corresponding standard features to a trainer, which learns what parameter settings sample sets with different features require in order to reach training precision that is as high as possible; and, when a new sample set is introduced, automatically selecting a group of optimal deep neural network parameters according to the features of the new sample set, thereby ensuring that prediction precision as high as possible can be obtained when a neural network is constructed with these parameters and trained on the new samples. The method has the advantages that the parameters of the deep neural network are automatically adjusted according to the features of the sample data by means of a machine learning algorithm, a suitable network model is built, and relatively high test precision can be ensured.
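As a hedged sketch of this pipeline (the specific feature list, the nearest-neighbour meta-model, and all names below are assumptions; the patent does not prescribe them), the flow from standard features to automatic parameter selection might look like:

```python
import numpy as np

def standard_features(samples: np.ndarray) -> np.ndarray:
    """Illustrative 'standard features' summarising a sample set's distribution.
    The patent does not fix a feature list; mean, standard deviation,
    dimensionality, and set size are stand-ins for the real descriptors."""
    flat = samples.reshape(len(samples), -1)
    return np.array([flat.mean(), flat.std(),
                     float(flat.shape[1]), float(len(flat))])

class ParameterTrainer:
    """Maps sample-set features to the parameter setting that trained best;
    a 1-nearest-neighbour lookup stands in for the meta-model here."""
    def __init__(self):
        self.features, self.best_params = [], []

    def add(self, sample_set: np.ndarray, best_param: dict) -> None:
        # best_param: the candidate setting that reached the highest accuracy
        # on this known sample set (found by searching the candidate grid).
        self.features.append(standard_features(sample_set))
        self.best_params.append(best_param)

    def select(self, new_sample_set: np.ndarray) -> dict:
        """Automatically pick parameters for a new sample set from its features."""
        f = standard_features(new_sample_set)
        dists = [np.linalg.norm(f - g) for g in self.features]
        return self.best_params[int(np.argmin(dists))]
```

In this reading, the trainer is filled by running the candidate-parameter search on each known sample library and recording the best-performing setting, after which a new sample set only needs its standard features to receive a parameter recommendation.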

Description

Technical field

[0001] The invention relates to a deep neural network self-training method based on data characteristics, and belongs to the technical field of fuzzy recognition.

Background technique

[0002] Machine learning is an important subject in the field of artificial intelligence. Since the 1980s, machine learning has achieved great success in algorithms, theory, and applications. Since the late 1980s, the development of machine learning has roughly gone through two waves: shallow learning and deep learning.

[0003] Deep learning builds a hierarchical model structure similar to the human brain and extracts features from the input data step by step, from the bottom layer to the top, so that it can establish a good mapping from low-level signals to high-level semantics. The essence of deep learning is to learn more useful features by constructing a machine learning model with many hidden layers and massive training data, thereby ultimately improving the accur...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/08
CPC: G06N3/088
Inventor: 吴磊, 岳翰, 武德安, 陈鹏, 冯江远
Owner: UNIV OF ELECTRONICS SCI & TECH OF CHINA