
Multi-modal deep learning classification method based on semi-supervision

A deep-learning classification technology, applied in neural learning methods, character and pattern recognition, biological neural network models, etc., that addresses problems such as the lack of labeled samples.

Status: Inactive | Publication Date: 2018-04-24
SHENYANG AEROSPACE UNIVERSITY

AI Technical Summary

Problems solved by technology

[0003] At present, more researchers directly use deep models to fuse part of the modal information, and fewer construct deep network architectures based on the differences in the classification contribution of each modality. In addition, the classification performance of images depends mainly on a large number of training samples, but in reality labeled samples are often insufficient.



Examples


Embodiment Construction

[0028] Specific embodiments of the present invention will be described in detail below with reference to the accompanying drawings.

[0029] Figure 1 shows the system structure of the embodiment of the present invention. 102 is the input of the hyperspectral image samples and their label information, 103 represents the preprocessing of the received samples, 104 represents the grouping of the processed samples, 105 represents sending the grouped samples into the semi-supervised multi-modal deep learning framework for learning, 106 represents combining the results of each deep learning framework in a decision process, and 107 represents obtaining the final classification result.
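
The following is a minimal Python sketch of the processing flow just described (steps 102 to 107). It is an illustration only: the helper functions, the three-way modality split, and the mean fusion are assumptions made for readability, not details taken from the patent.

```python
import numpy as np

def split_modalities(x):
    # Placeholder for step 104: split the feature axis into three "modalities"
    # (e.g. texture / spatial-correlation / spectral groups of bands).
    return np.array_split(x, 3, axis=1)

def train_semi_supervised_network(modality, labels, n_classes):
    # Placeholder for step 105: one semi-supervised deep network per modality.
    # Here it only returns class scores of the right shape.
    return np.random.rand(modality.shape[0], n_classes)

def classify(samples, labels, n_classes):
    # Step 103: preprocess the received samples (simple standardization).
    samples = (samples - samples.mean(axis=0)) / (samples.std(axis=0) + 1e-8)
    # Step 104: group the processed samples by modality.
    modalities = split_modalities(samples)
    # Step 105: learn in the semi-supervised multi-modal deep learning framework.
    scores = [train_semi_supervised_network(m, labels, n_classes) for m in modalities]
    # Step 106: combine the result of each deep network into one decision.
    fused = np.mean(scores, axis=0)
    # Step 107: final classification result.
    return fused.argmax(axis=1)
```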

[0030] Figure 2 and Figure 3 show the specific practical steps of the semi-supervised multi-modal deep learning framework of the embodiment of the present invention. In this scheme, the different modal data of the hyperspectral images are fed into the deep neural networks, and a semi-supervised method is used to utilize the large number of unlabeled samples...
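
As a companion to the step description above, here is a minimal PyTorch sketch of a per-modality self-encoding network that obtains its initialization parameters by reconstruction over all samples, labeled and unlabeled alike, as the abstract describes. The layer sizes, the activation, and the training loop are illustrative assumptions, not values given in the patent.

```python
import torch
import torch.nn as nn

class ModalityAutoencoder(nn.Module):
    """Self-encoding network for one modality of the hyperspectral data."""
    def __init__(self, in_dim, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x):
        z = self.encoder(x)              # deep feature of this modality
        return self.decoder(z), z

def pretrain_by_reconstruction(model, data, epochs=50, lr=1e-3):
    # Reconstruction over ALL samples (labeled and unlabeled) yields the
    # initialization parameters for this modality's network.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        recon, _ = model(data)
        loss = loss_fn(recon, data)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```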



Abstract

While deep learning is used for classification, the rich multi-modal information of the samples and the differing classification contribution of each modality are both considered, and the problem of insufficient samples is solved with a semi-supervised method. Data of the different modalities of a hyperspectral image are sent into deep neural networks, the semi-supervised method makes use of a large number of unlabeled samples, and a deep neural network based on self-encoding is used for feature learning. All labeled and unlabeled data are sent into the self-encoding deep neural network for learning, similar networks are designed for the different modalities, each obtains its initialization parameters through self-encoding reconstruction, and the hidden attribute classes of the labeled samples are obtained through a clustering method. For the unlabeled data, a deep feature is calculated through the multi-target deep network, similar labeled samples are then searched for based on the cluster labels, and finally the labels of the unlabeled samples are predicted according to the label information of those labeled samples.
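
To make the last step of the abstract concrete, the sketch below clusters the deep features of the labeled samples, assigns each unlabeled feature to its nearest cluster, and predicts its label by voting over the labeled samples in that cluster. The choice of k-means and of majority voting is an assumption; the abstract only specifies "a clustering method" and a search for similar labeled samples.

```python
import numpy as np
from sklearn.cluster import KMeans

def predict_unlabeled(labeled_feats, labels, unlabeled_feats, n_clusters=10):
    # Cluster the deep features of the labeled samples to expose their
    # hidden grouping (the "clustering method" of the abstract).
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(labeled_feats)
    labeled_clusters = km.labels_

    # Assign each unlabeled deep feature to its most similar cluster.
    unlabeled_clusters = km.predict(unlabeled_feats)

    pseudo_labels = np.empty(len(unlabeled_feats), dtype=labels.dtype)
    for i, c in enumerate(unlabeled_clusters):
        # Predict the label from the labeled samples found in that cluster.
        members = labels[labeled_clusters == c]
        if members.size == 0:            # guard: fall back to a global vote
            members = labels
        values, counts = np.unique(members, return_counts=True)
        pseudo_labels[i] = values[counts.argmax()]
    return pseudo_labels
```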

Description

Technical field

[0001] The present invention relates to a semi-supervised multi-modal deep learning classification method that, while using deep learning for classification, takes into account the rich multi-modal information of the samples and the differences in the classification contribution of each modality, and uses a semi-supervised approach to solve the problem of insufficient labeled samples.

Background technique

[0002] Hyperspectral remote sensing images carry multiple kinds of modal information such as texture, spatial correlation, and spectrum. Fusing these many kinds of feature information enables hyperspectral remote sensing to detect more ground-object information, which greatly improves humanity's cognitive ability toward the objective world.

[0003] At present, more researchers directly use deep models to fuse part of the modal information, and fewer construct deep network architectures that account for the differences in the classification contribution of each modality. In addition, the classification performance of images depends mainly on a large number of training samples, but in reality labeled samples are often insufficient.

Claims


Application Information

IPC (8): G06K9/00, G06N3/08, G06N3/04, G06K9/62
CPC: G06N3/08, G06V20/13, G06N3/045, G06F18/23, G06F18/241
Inventors: 李照奎, 黄林, 刘翠微, 王天宁, 张德园, 赵亮, 石祥滨, 王岩, 吴昊
Owner SHENYANG AEROSPACE UNIVERSITY