
Feature Fusion Method Based on Stacked Autoencoder

A stacked-autoencoder (SAE) based feature fusion method. It addresses the problems of existing approaches, namely insignificant improvement in feature discrimination after fusion, a complex fusion network structure, and inability to meet real-time requirements. The method reduces network training and testing time, simplifies the SAE structure, and improves fusion efficiency.

Active Publication Date: 2019-09-13
NAT UNIV OF DEFENSE TECH

AI Technical Summary

Problems solved by technology

[0004] The existing SAE-based feature fusion algorithm (see Chen Y., Lin Z., Zhao X., et al. Deep Learning-Based Classification of Hyperspectral Data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2014, 7(6): 2094-2107) selects high-dimensional features, uses a complex fusion network structure, and requires long training, so it cannot meet real-time requirements.
In addition, the selected features are highly redundant and only weakly complementary, so feature discrimination is not significantly improved after fusion.


Examples


Embodiment Construction

[0016] The experimental data of the present invention is the MSTAR data set, which contains SAR (synthetic aperture radar) image slices of 10 types of military targets: BMP2, BRDM2, BTR60, BTR70, D7, T62, T72, ZIL131, ZSU234, and 2S1. Figure 1 gives an example slice of each of the 10 target types; the slices are uniformly cropped to 128×128 pixels.
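The patent only states that the slices are uniformly cropped to 128×128 pixels; a plain center crop (the exact cropping policy is an assumption, and `center_crop` is a hypothetical helper name) could be sketched as:

```python
import numpy as np

def center_crop(img, size=128):
    """Center-crop a 2-D image slice to size x size pixels.

    Assumes the input is at least size x size in both dimensions.
    """
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

slice_ = np.zeros((158, 158))       # a dummy MSTAR-style image chip
print(center_crop(slice_).shape)    # → (128, 128)
```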

[0017] Figure 2 is the flow chart of the present invention. In conjunction with a specific experiment, the concrete implementation steps are as follows:

[0018] The first step is to extract the TPLBP (Three-Patch Local Binary Patterns) texture feature of the image. The LBP (Local Binary Patterns) operator is first applied to obtain the LBP code value of each pixel of the original image; the TPLBP code value is then obtained by comparing LBP values between image patches, and the histogram vector is obtained by counting the TPLBP code values.
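The basic LBP operator that this step builds on compares each pixel's 8 neighbors to the center pixel and packs the comparisons into one byte. A minimal NumPy sketch (not the patent's implementation; the neighbor ordering here is one common convention) is:

```python
import numpy as np

def lbp_8neighbor(img):
    """Compute the basic 3x3 LBP code for each interior pixel.

    Each of the 8 neighbors contributes one bit: 1 if the neighbor's
    intensity is >= the center pixel, 0 otherwise. Border pixels are
    skipped, so the output is 2 pixels smaller in each dimension.
    """
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Offsets of the 8 neighbors, clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 8, 3]])
print(lbp_8neighbor(patch))  # → [[42]]  (bits 1, 3, 5 set: 2 + 8 + 32)
```

TPLBP then repeats this comparison idea one level up, comparing whole patches of LBP codes instead of single pixels.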



Abstract

The invention provides a feature fusion method based on a stacked autoencoder. The technical solution includes the following: first, extract the local three-patch binary pattern texture feature of the image, select and extract several baseline features of the image using a feature selection method, and concatenate all the obtained features into a single vector. Then, normalize the concatenated vector and whiten it. The whitened result is used as the input of the SAE, which is trained with the layer-by-layer greedy training method. Finally, the trained SAE is fine-tuned through a softmax classifier to minimize the loss function; the output of the SAE is a highly discriminative fused feature vector. The features selected by the present invention have little redundancy and provide richer information for feature fusion.
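The normalize-then-whiten preprocessing step in the abstract can be sketched as below. The abstract does not say which whitening variant is used; ZCA whitening and the function name `normalize_and_whiten` are assumptions for illustration:

```python
import numpy as np

def normalize_and_whiten(X, eps=1e-5):
    """Normalize each feature to zero mean / unit variance, then ZCA-whiten.

    X is (n_samples, n_features). After whitening, the empirical covariance
    of the returned matrix is approximately the identity, decorrelating the
    concatenated features before they are fed to the SAE.
    """
    X = np.asarray(X, dtype=np.float64)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + eps)   # per-feature normalization
    cov = X.T @ X / X.shape[0]                          # feature covariance
    U, S, _ = np.linalg.svd(cov)                        # eigendecomposition
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T       # ZCA whitening matrix
    return X @ W

rng = np.random.default_rng(0)
# Fake "concatenated feature vectors": 200 samples, 6 correlated features.
base = rng.normal(size=(200, 3))
X = np.hstack([base, base + 0.1 * rng.normal(size=(200, 3))])
Xw = normalize_and_whiten(X)
print(np.round(Xw.T @ Xw / Xw.shape[0], 2))  # ≈ 6x6 identity matrix
```

Decorrelating the inputs this way is a standard way to make layer-wise greedy SAE training better conditioned, which is consistent with the efficiency claims above.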

Description

Technical field

[0001] The invention belongs to the technical field of image fusion and relates to a feature fusion method based on SAE (Stacked Autoencoder) that improves the discrimination and efficiency of fused features.

Background technique

[0002] Feature fusion refers to the comprehensive analysis and fusion of extracted feature information. In image understanding, feature fusion can not only increase the feature information of the image but also effectively integrate the advantages of the original features, yielding a more comprehensive feature expression of the target. Classical feature fusion algorithms (see Wang Dawei, Chen Dingrong, He Yizheng. A review of multi-feature image fusion technology for target recognition. Avionics Technology, 2011, 42(2): 6-12) combine features directly in a fixed way and do not essentially consider the influence of the r...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/62
CPC: G06F18/253
Inventor: 计科峰 (Ji Kefeng), 康妙 (Kang Miao), 冷祥光 (Leng Xiangguang), 邹焕新 (Zou Huanxin), 雷琳 (Lei Lin), 孙浩 (Sun Hao), 李智勇 (Li Zhiyong), 周石琳 (Zhou Shilin)
Owner: NAT UNIV OF DEFENSE TECH