
Data distribution characteristic-based order-preserving learning machine

A technology combining data distribution characteristics with a learning machine, applied in the field of order-preserving learning machines. It addresses the problems that existing classifiers do not consider the distribution characteristics of the data, ignore the relative relationship among the classes of samples, and therefore cannot further improve classification performance.

Inactive Publication Date: 2018-09-21
PANZHIHUA UNIV
Cites: 0 | Cited by: 1

AI Technical Summary

Problems solved by technology

These methods have achieved good classification results in practical applications, but still face the following challenges: (1) the classification process does not consider the distribution characteristics of the data, so the classification performance cannot be further improved; (2) the classification results ignore the relative relationship among the various classes of samples.



Examples


Embodiment 1

[0088] Five classes of Gaussian-distributed data are generated artificially, with 40 samples per class; the class centers are (0,0), (6,6), (12,12), (18,18) and (24,24), and the standard deviation is set to 2. The generated dataset is shown in Figure 2. The direction vector obtained by RPLM-DDF is W, and the generated data are projected onto W to obtain Figure 3. From the experimental results it can be seen that the RPLM-DDF of this embodiment achieves good separability and keeps the relative order of the samples unchanged.
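As a rough illustration of this setup (a sketch only, not the patented RPLM-DDF solver), the following Python snippet generates the five Gaussian classes described above and projects them onto a unit direction vector; the vector w used here is a hand-picked stand-in for the direction that RPLM-DDF would actually learn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five Gaussian classes, 40 samples each, standard deviation 2,
# centred at (0,0), (6,6), (12,12), (18,18), (24,24).
centers = np.array([[0, 0], [6, 6], [12, 12], [18, 18], [24, 24]], dtype=float)
X = np.vstack([rng.normal(loc=c, scale=2.0, size=(40, 2)) for c in centers])
y = np.repeat(np.arange(len(centers)), 40)

# Stand-in projection direction; RPLM-DDF would learn its own vector W.
w = np.array([1.0, 1.0]) / np.sqrt(2.0)

# Project every sample onto w; with a good direction the five classes
# remain separable and keep their original order along the projection line.
proj = X @ w
for k in range(len(centers)):
    print(f"class {k}: mean projection = {proj[y == k].mean():.2f}")
```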

Embodiment 2

[0090] The experiments use the standard Iris dataset, which consists of 3 different types of iris with a total of 150 samples. Each sample consists of five attributes: sepal length, sepal width, petal length, petal width, and type. 60% of the Iris dataset is randomly selected as the training set, and the remaining 40% is used as the test set.
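A minimal sketch of this data preparation, assuming the scikit-learn copy of the Iris dataset (the patent does not name a particular data source or random seed):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# 150 samples, 4 numeric attributes plus the class label (3 iris types).
X, y = load_iris(return_X_y=True)

# Randomly hold out 40% of the samples for testing, as in this embodiment.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.4, random_state=0
)
print(X_train.shape, X_test.shape)  # (90, 4) (60, 4)
```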

[0091] The effectiveness of RPLM-DDF is verified through comparative experiments with SVC (Support Vector Classification) and naive Bayes. The RPLM-DDF algorithm uses a Gaussian kernel function. The experimental parameters are obtained by grid search: ν is selected from {0.1, 0.5, 1, 3, 5, 10} and σ from a grid of candidate values; the optimal experimental parameter is ν = 0.5. SVC uses a Gaussian kernel function with a penalty factor C of 1, and the naive Bayes algorithm does not set the prior probabilities. The experimental results are shown in Figure 4, from which it can be seen that, compared with SVC and naive Bayes...
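The two reference classifiers can be reproduced roughly as follows; RPLM-DDF itself is not published in this excerpt, so only the SVC (Gaussian kernel, C = 1) and naive Bayes baselines are sketched, with scikit-learn as an assumed implementation:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Same 60/40 split of the Iris dataset as in the embodiment.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)

# SVC baseline: Gaussian (RBF) kernel with penalty factor C = 1.
svc = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)

# Naive Bayes baseline; no class priors are supplied, so they are
# estimated from the training data.
nb = GaussianNB().fit(X_train, y_train)

for name, clf in [("SVC", svc), ("Naive Bayes", nb)]:
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
```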



Abstract

The invention relates to the field of machine learning and discloses a data distribution characteristic-based order-preserving learning machine, in which the relative order of samples is kept unchanged during classification. The distribution characteristic of the data is represented by introducing the within-class scatter from linear discriminant analysis, and the relative relationship of the samples is guaranteed to be taken into consideration during classification by adding a constraint on the relative relationship of the class centers to the optimization problem. The data distribution characteristic-based order-preserving learning machine is suitable for pattern classification.
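For reference, the within-class scatter from linear discriminant analysis that the abstract refers to is conventionally written as below; the patent's own notation is not reproduced in this excerpt, so this is the textbook definition.

```latex
% Within-class scatter matrix over C classes:
% X_c is the set of training samples of class c, \mu_c its mean vector.
S_w = \sum_{c=1}^{C} \sum_{x_i \in X_c} (x_i - \mu_c)(x_i - \mu_c)^{\mathsf{T}}
```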

Description

Technical field

[0001] The invention relates to the field of machine learning, in particular to an order-preserving learning machine based on data distribution characteristics.

Background technique

[0002] Pattern classification is one of the research hotspots in machine learning, pattern recognition, data mining and other fields. Common classification methods include decision trees, association rules, naive Bayes and support vector machines. These methods have achieved good classification results in practical applications, but still face the following challenges: (1) the classification process does not consider the distribution characteristics of the data, so the classification performance cannot be further improved; (2) the classification results ignore the relative relationship among the various classes of samples.

Contents of the invention

[0003] The technical problem to be solved by the present invention is to provide an order-preserving learning machine based on data ...


Application Information

IPC(8): G06N99/00
Inventor: 刘忠宝, 张靖, 周方晓, 秦振涛, 罗学刚
Owner: PANZHIHUA UNIV