
Feature visualization method and system of convolutional neural network model based on sparse attention

A convolutional neural network and attention technology, applied in the field of image classification feature visualization. It addresses the problems that classification results cannot be explained and that the image region contributing most to the classification result cannot be located, with the effects of making the reasons for classification decisions easier to explain and improving image classification accuracy.

Active Publication Date: 2020-04-21
PLA STRATEGIC SUPPORT FORCE INFORMATION ENG UNIV (PLA SSF IEU) +1
Cites 3 · Cited by 29

AI Technical Summary

Problems solved by technology

[0006] The accuracy of image classification models based on deep convolutional networks is increasingly high, but because of the "end-to-end" nature of deep networks, the classification process behaves like a "black box": the classification result cannot be explained, nor is it possible to locate which region of the image contributes most to that result. To address this, a feature visualization method and system for a convolutional neural network model based on sparse attention is proposed.

Method used



Examples


Embodiment 1

[0060] As shown in Figure 1, a feature visualization method for a convolutional neural network model based on sparse attention includes:

[0061] Step S101: extract features from the color images in the input training samples using multiple convolutional layers and down-sampling layers, and output multi-channel feature maps; the training samples consist of multiple color images and their corresponding category labels;

[0062] Specifically, a convolutional layer meeting the requirements can be designed, or the feature extraction part of a commonly used convolutional neural network structure can be used, such as AlexNet, VGGNet, ResNet and other convolutional neural networks and their variants.

[0063] For the input training set {(x_i, y_i)}, i = 1, ..., N, where N represents the number of samples, the feature extraction process can be formalized as follows:

[0064] F = CONV(x; θ)

[0065] where x_i represents the i-th color image, y_i represents the category label corresponding to x_i, and the feature map F ...
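As an illustration of this feature extraction step (a minimal sketch, not the patent's reference implementation; the PyTorch framework, layer depth and channel counts are assumptions), CONV(x; θ) could be built from a few convolution and down-sampling layers:

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """CONV(x; theta): convolution + down-sampling layers that map a batch of
    color images x to a multi-channel feature map F. Depth and channel counts
    are illustrative; the patent also allows reusing the feature-extraction
    part of AlexNet, VGGNet, ResNet, or their variants."""
    def __init__(self, out_channels=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                   # down-sampling layer
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                   # down-sampling layer
            nn.Conv2d(128, out_channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):        # x: (N, 3, H, W), N = number of samples
        return self.features(x)  # F: (N, out_channels, H/4, W/4)

# Example: a batch of 4 color images of size 224x224.
x = torch.randn(4, 3, 224, 224)
F = FeatureExtractor()(x)        # F = CONV(x; theta), shape (4, 256, 56, 56)
```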

Embodiment 2

[0094] As shown in Figure 3, a feature visualization system for a convolutional neural network model based on sparse attention includes:

[0095] The feature extraction module 201 is used to extract features from the color images in the input training samples using multiple convolutional layers and down-sampling layers, and to output multi-channel feature maps; the training samples consist of multiple color images and corresponding category labels;

[0096] The attention module 202 is used to implement pixel-level attention through the convolution and deconvolution operations of a convolution-deconvolution network; the pixel-level attention is used to perform weighted adjustment on the feature maps, obtaining attention-weighted feature maps;
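A minimal sketch of such an attention module is given below (PyTorch is assumed, and the exact layer configuration is illustrative rather than the patent's design): a convolution followed by a deconvolution produces a single-channel, pixel-level attention map with the same spatial size as the feature map, which is then used to re-weight the feature map.

```python
import torch
import torch.nn as nn

class PixelAttention(nn.Module):
    """Convolution-deconvolution branch: produces a pixel-level attention map A
    with the same spatial size as the feature map F, then re-weights F as A * F.
    Channel sizes are assumptions; even spatial dimensions are assumed so the
    deconvolution exactly restores the input resolution."""
    def __init__(self, in_channels=256):
        super().__init__()
        self.conv = nn.Sequential(                            # convolution operation
            nn.Conv2d(in_channels, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.deconv = nn.Sequential(                          # deconvolution operation
            nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),                                     # per-pixel weights in (0, 1)
        )

    def forward(self, feat):
        attn = self.deconv(self.conv(feat))   # (N, 1, H', W') pixel-level attention
        return attn * feat, attn              # weighted feature map and attention map

# Example: re-weight the feature map F produced by the feature extractor.
F = torch.randn(4, 256, 56, 56)
F_weighted, A = PixelAttention(256)(F)
```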

[0097] The classification module 203 is used to adopt the cross-entropy loss function as the classification loss function and to apply an L1 regularization constraint to the pixel-level atte...
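A sketch of such an improved classification loss is shown below, assuming a PyTorch implementation and an assumed regularization weight: the cross-entropy term handles classification, while an L1 term on the pixel-level attention map encourages the attention to become sparse.

```python
import torch
import torch.nn.functional as nnf

def improved_classification_loss(logits, labels, attention, l1_weight=1e-4):
    """Cross-entropy classification loss plus an L1 regularization constraint on
    the pixel-level attention map; l1_weight is an assumed hyperparameter that
    controls how strongly sparsity of the attention is enforced."""
    ce_loss = nnf.cross_entropy(logits, labels)   # classification loss
    l1_penalty = attention.abs().mean()           # L1 constraint on attention
    return ce_loss + l1_weight * l1_penalty
```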



Abstract

The invention discloses a feature visualization method and system for a convolutional neural network model based on sparse attention. The method comprises the steps of: performing feature extraction on an input color image and outputting a multi-channel feature map; performing weighted adjustment on the feature map using pixel-level attention; adopting a cross-entropy loss function as the classification loss function, applying an L1 regularization constraint to the pixel-level attention to improve the classification loss function, and training on the weighted feature map to obtain a classification result; and superposing the adjusted feature map on the originally input color image to obtain a visual display of the important features of the color image, thereby giving a visual explanation of the classification result. The system comprises a feature extraction module, an attention module, a classification module and a feature visualization module. The invention improves image classification accuracy while visually displaying the most important feature regions of the image.
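As a rough sketch of the final visualization step (assumptions: a PyTorch tensor pipeline, bilinear up-sampling, and an arbitrary 0.5/0.5 blend ratio), the pixel-level attention map can be up-sampled to the image resolution and superposed on the original color image:

```python
import torch
import torch.nn.functional as nnf

def visualize_important_features(image, attention):
    """Superpose the pixel-level attention on the original color image so that
    the regions most important for the classification decision become visible.
    image: (3, H, W) tensor in [0, 1]; attention: (1, h, w) attention map."""
    heat = nnf.interpolate(attention.unsqueeze(0), size=image.shape[-2:],
                           mode="bilinear", align_corners=False)[0]
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)  # normalize to [0, 1]
    return 0.5 * image + 0.5 * heat        # blended visual explanation, shape (3, H, W)
```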

Description

Technical Field

[0001] The invention belongs to the technical field of image classification feature visualization, and in particular relates to a feature visualization method and system of a convolutional neural network model based on sparse attention.

Background Technique

[0002] Existing convolutional neural network visualization methods include deconvolution-based, gradient-based, and back-propagation-based methods. These methods have a certain effect on visualizing the features learned by convolutional neural networks and class-discriminative features, but most of them are aimed only at feature visualization research and do not contribute to the performance of the convolutional neural network itself. Starting from this point, the present invention therefore studies how to more accurately locate the most important features of the target object while improving the classification performance of the convolutional neural network.

[0003] On the one...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06K9/46; G06N3/04; G06N3/08
CPC: G06N3/084; G06V10/56; G06N3/045; G06F18/24
Inventors: 张文林, 司念文, 牛铜, 罗向阳, 屈丹, 杨绪魁, 李真, 闫红刚, 张连海, 魏雪娟
Owner: PLA STRATEGIC SUPPORT FORCE INFORMATION ENG UNIV (PLA SSF IEU)