
A Classifier Design Method Based on Self-Explanatory Sparse Representations in Kernel Space

A sparse representation classifier design technology, applied in the field of pattern recognition, which solves the problems of large fitting error and low classifier accuracy, and achieves the effect of reducing fitting error and improving classification performance.

Active Publication Date: 2017-05-24
CHINA UNIV OF PETROLEUM (EAST CHINA)

AI Technical Summary

Problems solved by technology

[0013] To address the above-mentioned shortcomings of existing classifier design methods, namely large fitting error and low accuracy, the present invention provides a classifier design method based on self-explanatory sparse representation in kernel space.



Embodiment Construction

[0056] The present invention is further described below through a simulation example and with reference to the accompanying drawings.

[0057] A classifier design method based on self-explanatory sparse representation in kernel space comprises the following steps:

[0058] Step 1: Design a classifier, the steps are:

[0059] (1) Read the training samples. The training samples contain C classes in total. Define X = [X_1, X_2, ..., X_c, ..., X_C] ∈ R^(D×N) as the training sample matrix, where D is the face feature dimension and N is the total number of training samples; X_1, X_2, ..., X_c, ..., X_C denote the samples of classes 1, 2, ..., c, ..., C, respectively. Define N_1, N_2, ..., N_c, ..., N_C as the numbers of training samples in each class, so that N = N_1 + N_2 + ... + N_c + ... + N_C;

[0060] (2) Apply two-norm (L2) normalization to the training samples to obtain normalized training samples (steps (1) and (2) are sketched in the code after this list);

[0061] (3) Take out each class in the trainin...
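Below is a minimal sketch of steps (1) and (2) above, assuming the per-class training matrices are already available as NumPy arrays; the variable names and the toy data are illustrative and not part of the patent:

```python
import numpy as np

# Illustrative per-class training matrices: X_c has shape (D, N_c), where D is the
# feature dimension and N_c is the number of training samples of class c.
rng = np.random.default_rng(0)
D = 64
class_sizes = [10, 12, 8]                                  # toy values of N_1, N_2, N_3
X_per_class = [rng.normal(size=(D, n)) for n in class_sizes]

# Step (1): stack the classes in order to form X = [X_1, X_2, ..., X_C] in R^(D x N).
X = np.hstack(X_per_class)
N = X.shape[1]                                             # N = N_1 + N_2 + ... + N_C

# Step (2): two-norm (L2) normalization of every training sample (column).
def l2_normalize_columns(A, eps=1e-12):
    norms = np.linalg.norm(A, axis=0, keepdims=True)
    return A / np.maximum(norms, eps)

X_normalized = l2_normalize_columns(X)
print(np.linalg.norm(X_normalized, axis=0)[:5])            # each column now has unit norm
```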



Abstract

The invention relates to a classifier design method based on self-explanatory sparse representation in kernel space, which comprises the following steps: read the training samples and map them to a high-dimensional kernel space; perform dictionary learning on each class of training samples in the kernel space to find the contribution (i.e., the weight) of each individual sample to the construction of that class's subspace; the product of each class's training samples and its weight matrix constitutes the class dictionary, and all class dictionaries are arranged in order to form a large dictionary matrix; for a test sample, obtain its sparse coding in kernel space over this dictionary matrix, fit the test sample with each class's dictionary and the sparse codes corresponding to that dictionary, and compute the fitting error; the class with the smallest fitting error is the class of the test sample. Compared with the prior art, the present invention combines the kernel trick with dictionary learning: on the one hand, the nonlinear structure of the features is taken into account, so the features can be sparsely coded more accurately; on the other hand, the dictionary is obtained by learning, which effectively reduces the fitting error. This greatly improves the performance of the classifier.
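As a rough illustration of the classification rule described above, the sketch below computes the kernel-space fitting error of a test sample against each class dictionary phi(X_c)·W_c and picks the class with the smallest error. It assumes the per-class weight matrices W_c have already been learned (here they are simply set to the identity, i.e. the dictionary equals the raw training samples), uses an RBF kernel, and replaces the patent's sparse coding over the concatenated dictionary with a simple ridge-regularized per-class coding step; all names and parameters are illustrative, not the patent's actual algorithm.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    """Gaussian (RBF) kernel matrix between the columns of A and B."""
    sq = (np.sum(A**2, axis=0)[:, None] + np.sum(B**2, axis=0)[None, :]
          - 2.0 * A.T @ B)
    return np.exp(-gamma * sq)

def class_fitting_error(X_c, W_c, y, gamma=0.1, lam=1e-3):
    """Fitting error ||phi(y) - phi(X_c) W_c s||^2, expressed with kernel values only.
    The coding vector s is obtained by ridge-regularized least squares here, as a
    stand-in for the sparse coding used in the patent."""
    K_cc = rbf_kernel(X_c, X_c, gamma)                  # K(X_c, X_c), shape (N_c, N_c)
    k_cy = rbf_kernel(X_c, y[:, None], gamma)[:, 0]     # K(X_c, y),   shape (N_c,)
    k_yy = rbf_kernel(y[:, None], y[:, None], gamma)[0, 0]
    G = W_c.T @ K_cc @ W_c                              # Gram matrix of the class dictionary
    b = W_c.T @ k_cy
    s = np.linalg.solve(G + lam * np.eye(G.shape[0]), b)
    return k_yy - 2.0 * s @ b + s @ G @ s

def classify(X_per_class, W_per_class, y, gamma=0.1):
    errors = [class_fitting_error(X_c, W_c, y, gamma)
              for X_c, W_c in zip(X_per_class, W_per_class)]
    return int(np.argmin(errors)), errors               # smallest fitting error wins

# Toy usage: two classes, identity weight matrices (i.e. no dictionary learning).
rng = np.random.default_rng(1)
D, n_c = 8, 15
X_per_class = [0.5 * rng.normal(size=(D, n_c)) + mu for mu in (0.0, 3.0)]
W_per_class = [np.eye(n_c) for _ in X_per_class]
y = X_per_class[1][:, 0] + 0.05 * rng.normal(size=D)     # close to a class-1 sample
label, errors = classify(X_per_class, W_per_class, y)
print(label)                                             # expected: 1
```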

Description

Technical Field

[0001] The invention belongs to the technical field of pattern recognition, and in particular relates to a classifier design method based on kernel-space self-explanatory sparse representation.

Background Technique

[0002] The pattern recognition process usually consists of two stages: the first is feature extraction, and the second is classifier construction and label prediction. Classifier design, as an important part of a pattern recognition system, has always been one of the core issues in pattern recognition research.

[0003] At present, the main classifier design methods are as follows.

[0004] 1. Support vector machine method (Support Vector Machine, SVM)

[0005] The support vector machine method was first proposed by Corinna Cortes and Vapnik in 1995; it aims to establish the optimal classification surface by maximizing the margin between classes. This type of method shows many unique advantages in solving ...
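For context on the prior-art baseline mentioned in [0004]-[0005], a margin-maximizing classifier can be trained in a few lines with scikit-learn's SVC; the toy data and parameters below are illustrative only and unrelated to the patent:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Two toy classes in a 2-D feature space.
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),
               rng.normal(3.0, 1.0, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf", C=1.0)     # finds the maximum-margin separating surface
clf.fit(X, y)
print(clf.predict([[0.2, -0.1], [3.1, 2.8]]))   # expected: [0 1]
```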

Claims


Application Information

IPC(8): G06K9/62
CPC: G06F18/24, G06F18/214
Inventor: 刘宝弟, 王立, 韩丽莎, 王延江
Owner: CHINA UNIV OF PETROLEUM (EAST CHINA)