
A Method for Recognition of EEG Maps Based on Deep Convolutional Neural Networks

A neural network recognition and deep convolution technology, applied in the field of feature dimensionality reduction and classification, which addresses problems such as classification results being adversely affected, loss of lead electrode position information, and poor network fitting ability, and achieves the effects of improved feature expression and recognition accuracy and enhanced fitting ability.

Active Publication Date: 2020-11-27
BEIJING UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0005] (1) When the frequency-domain feature extraction method is used to image the EEG signal, an FFT is performed on each MI-EEG in the feature extraction stage, and either the spectrum at each frequency point or the square of its modulus is taken as the signal feature. This cannot fully represent the numerical differences in the power characteristics of the signal;
[0006] (2) Existing methods use the frequency-domain or spatial-domain features of MI-EEG, but the time-frequency features are not effectively reflected. In addition, several leads or multiple sub-band features are stacked without order to form an imaging matrix, so the lead electrode position information contained in the original BCI acquisition system is lost, which adversely affects recognition;
[0007] (3) When the number of layers of the convolutional neural network and the number of neurons in each convolutional layer are too small, the fitting ability of the network is poor, its generalization performance is weak, and it is not conducive to multi-dimensional deep extraction of signal features. Furthermore, using a maximum pooling operation for data dimensionality reduction after the convolution operation discards 75% of the pixels, so too much information is lost when processing high-order feature maps, which affects the classification results.
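Problem (3) above concerns the information discarded by max pooling. The short sketch below is illustrative only (shapes and channel counts are hypothetical, not taken from the patent); it contrasts 2x2 max pooling, which keeps one of every four values, with a stride-2 convolution that reaches the same resolution while letting every input value contribute through learned weights.

```python
# Illustrative only: contrast 2x2 max pooling with a stride-2 convolution for
# downsampling a feature map. Shapes and channel counts are hypothetical.
import tensorflow as tf

x = tf.random.normal((1, 32, 32, 64))  # one 32x32 feature map with 64 channels

# Max pooling keeps the largest of every 2x2 block, i.e. discards 3 of 4 values.
pooled = tf.keras.layers.MaxPooling2D(pool_size=2)(x)

# A stride-2 convolution reaches the same 16x16 resolution, but every input
# value contributes to the output through learned kernel weights.
downsampled = tf.keras.layers.Conv2D(64, kernel_size=2, strides=2)(x)

print(pooled.shape, downsampled.shape)  # (1, 16, 16, 64) (1, 16, 16, 64)
```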

Method used



Examples


Embodiment Construction

[0076] The concrete experiments of the present invention are carried out in the TensorFlow framework under the Ubuntu (64-bit) operating system, and the convolutional neural network training is completed on an NVIDIA GTX 1080 Ti graphics card.

[0077] The MI-EEG data set used in the present invention comes from the public database of the BCI 2000 acquisition system and was collected by its developers using the 64-lead electrode caps of the international standard 10-10 lead BCI 2000 system, with a sampling frequency of 160 Hz. The distribution of the electrodes on the scalp is shown in Figure 1.
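The recordings described in [0077] match publicly distributed BCI 2000 motor-imagery data, which are commonly available as EDF files. As a minimal sketch (not from the patent; the file name is a hypothetical local file), one recording could be loaded with MNE-Python and checked against the stated 160 Hz sampling rate and 64 leads:

```python
# Minimal sketch (not from the patent): loading one publicly distributed
# BCI 2000 motor-imagery recording with MNE-Python. The EDF file name is
# a hypothetical local file.
import mne

raw = mne.io.read_raw_edf("S001R04.edf", preload=True)
print(raw.info["sfreq"])   # expected sampling frequency: 160.0 Hz
print(len(raw.ch_names))   # expected number of leads: 64 (10-10 montage)
```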

[0078] Each experiment lasts 5 s. 0-1 s is the resting-state period: a cross cursor appears on the screen and, at t = 0 s, a short alert tone is issued. 1-4 s is the motor-imagery period: a prompt cursor appears above or below the screen; if the cursor appears above, the subjects imagine moving their hands; if the cursor appears below, the subjects imagined mov...
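As a small illustration of the trial timing described in [0078] (not the patented code; the trial array is a placeholder), the 1-4 s motor-imagery period can be sliced out of a 5 s trial sampled at 160 Hz as follows:

```python
# Illustration of the trial timing only; `trial` is a placeholder array, not
# real data. The sampling frequency follows the 160 Hz stated above.
import numpy as np

fs = 160                                # sampling frequency (Hz)
trial = np.random.randn(64, 5 * fs)     # one hypothetical 64-lead, 5 s trial

rest = trial[:, 0:1 * fs]               # 0-1 s: resting state (cross cursor, tone)
mi = trial[:, 1 * fs:4 * fs]            # 1-4 s: motor-imagery period
print(rest.shape, mi.shape)             # (64, 160) (64, 480)
```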



Abstract

The invention discloses a method for recognizing an electroencephalogram image based on a deep convolutional neural network. The method comprises the following steps: carrying out baseline-elimination preprocessing on the collected motor imagery electroencephalogram signals; dividing each lead signal into a plurality of time windows and carrying out a fast Fourier transform on the MI-EEG in each window; carrying out an inverse fast Fourier transform on each windowed spectrum and calculating the corresponding time-domain power values; averaging the time-domain power values over the windows to obtain the time-domain power features; performing interpolation imaging of the extracted three-frequency-band power features in a data matrix to obtain a pseudo-RGB image of the MI-EEG signal; designing the DCNN model as five segments of convolution and, after each segment of convolution, replacing the maximum pooling layer with a convolution layer for data dimensionality reduction; and evaluating the trained DCNN model on the test set to complete the classification test. MI-EEG images have advantages in feature expression and, matched with a 30-layer DCNN with higher model fitting capability, are of great significance for improving MI-EEG signal feature expression and classification accuracy.
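A hedged sketch of the imaging idea summarized in the abstract is given below. It is not the patented implementation: the electrode coordinates, frequency bands, 32x32 grid, and the use of a single window are placeholder assumptions. It shows band-limited time-domain power per lead being interpolated over 2-D lead coordinates and stacked into a pseudo-RGB map.

```python
# Hedged sketch of the imaging idea only, not the patented implementation.
# Electrode coordinates, frequency bands, the 32x32 grid, and the single
# window are placeholder assumptions.
import numpy as np
from scipy.interpolate import griddata

n_leads, fs = 64, 160
eeg = np.random.randn(n_leads, 3 * fs)        # placeholder MI segment per lead
coords = np.random.rand(n_leads, 2)           # placeholder 2-D lead coordinates

bands = [(4, 8), (8, 13), (13, 30)]           # example frequency bands (Hz)
freqs = np.fft.rfftfreq(eeg.shape[1], d=1 / fs)
spectrum = np.fft.rfft(eeg, axis=1)           # FFT of each lead signal

grid_x, grid_y = np.mgrid[0:1:32j, 0:1:32j]   # interpolation grid
channels = []
for lo, hi in bands:
    mask = (freqs >= lo) & (freqs < hi)
    # band-limit the spectrum, return to the time domain, average the power
    band_time = np.fft.irfft(spectrum * mask, n=eeg.shape[1], axis=1)
    power = (band_time ** 2).mean(axis=1)     # one time-domain power value per lead
    img = griddata(coords, power, (grid_x, grid_y), method="cubic", fill_value=0.0)
    channels.append(img)

pseudo_rgb = np.stack(channels, axis=-1)      # (32, 32, 3) pseudo-RGB feature map
print(pseudo_rgb.shape)
```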

Description

technical field

[0001] The present invention belongs to the field of feature extraction and classification of motor imagery electroencephalogram (MI-EEG) signals based on deep convolutional neural networks (Deep Convolutional Neural Network, DCNN). Specifically, it relates to generating an MI-EEG pseudo-RGB three-channel feature map through feature extraction based on the fast Fourier transform (FFT) and 2D lead-coordinate interpolation imaging, with feature dimensionality reduction and classification performed by a DCNN.

Background technique

[0002] A deep convolutional neural network (DCNN) is a feed-forward neural network that combines techniques such as local receptive fields, convolution-kernel weight sharing, nonlinear activation of neurons, and convolution operations for data dimensionality reduction. It is widely used in the field of image recognition. The network has great advantages in feature extraction and dimensionality reduction of multi-dimensional feature data,...
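A minimal Keras sketch of the kind of architecture the description points to is shown below: five convolution segments, each closed by a stride-2 convolution in place of a max-pooling layer. The filter counts, kernel sizes, input resolution, and classifier head are assumptions, not the patented 30-layer configuration.

```python
# Minimal Keras sketch under assumptions: five convolution segments, each
# ended by a stride-2 convolution instead of a max-pooling layer. Filter
# counts, kernel sizes, and input size are hypothetical.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dcnn(input_shape=(32, 32, 3), n_classes=2):
    model = models.Sequential([layers.Input(shape=input_shape)])
    for filters in [64, 128, 256, 512, 512]:      # hypothetical segment widths
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        # a strided convolution performs the dimensionality reduction that a
        # max-pooling layer would otherwise do
        model.add(layers.Conv2D(filters, 2, strides=2, activation="relu"))
    model.add(layers.Flatten())
    model.add(layers.Dense(n_classes, activation="softmax"))
    return model

model = build_dcnn()
model.summary()
```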

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K 9/62; G06N 3/04; G06N 3/08
Inventor: 李明爱, 韩健夫, 杨金福, 孙炎珺
Owner: BEIJING UNIV OF TECH