
Multi-modal small sample learning method based on significance

A learning method and multi-modal technology, applied in character and pattern recognition, biological neural network models, instruments, etc. It addresses problems such as the high cost in manpower and financial resources of labeling, the limited applicability of models, and the difficulty of obtaining data, with the effects of enhancing classification ability, enriching feature representation, and improving usability.

Active Publication Date: 2020-11-03
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

[0002] With the introduction of convolutional neural networks, deep learning has achieved breakthroughs in image classification, speech recognition, and object detection. However, these approaches usually require large amounts of labeled training data, such as ImageNet. In daily life it is often very difficult to obtain such data (for example, photos of endangered species or medical images), which severely limits the applicability of these models in the real world, and labeling images also consumes considerable manpower and financial resources. Small-sample learning, by contrast, seeks to recognize a new object from a very small number of samples, use previously learned knowledge to help learn new content quickly, and integrate new concepts into existing concept networks.




Detailed Description of the Embodiments

[0023] The present invention will be described in further detail below in conjunction with the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit the present invention.

[0024] The saliency-based multimodal small-sample learning method provided by the present invention mainly adds saliency-map extraction, multimodal information combination, and label propagation on top of traditional small-sample classification. First, the saliency map of an image is obtained through a saliency detection network, yielding the foreground and background regions of the image. Then, semantic information and visual information are combined through a multimodal hybrid model, with the semantic information assisting the visual information in classification. Finally, the manifold structure of the data samples is used to construct a graph, and labels are propagated from the labeled support samples to the unlabeled query samples.
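As an illustration of the multimodal combination step, the sketch below mixes foreground and background visual features with their semantic word embeddings through a convex mixing weight. The feature dimensions, the fixed coefficient `lam`, and the function name `modal_mix` are assumptions for illustration only; in the described method the combination is adaptive rather than fixed.

```python
import numpy as np

def modal_mix(v_fg, v_bg, s_fg, s_bg, lam=0.7):
    """Combine visual (v_*) and semantic (s_*) features of the
    foreground (fg) and background (bg) regions of one sample.

    lam is a fixed mixing weight here; in the described method the
    combination is adaptive (learned), so this is only a sketch.
    """
    fg = lam * v_fg + (1.0 - lam) * s_fg   # foreground: visual + word embedding
    bg = lam * v_bg + (1.0 - lam) * s_bg   # background: visual + word embedding
    return np.concatenate([fg, bg])        # multi-modal sample representation

# toy 4-dimensional features for one support sample
rng = np.random.default_rng(0)
v_fg, v_bg, s_fg, s_bg = (rng.normal(size=4) for _ in range(4))
feat = modal_mix(v_fg, v_bg, s_fg, s_bg)
print(feat.shape)   # (8,)
```

The concatenated vector keeps the foreground and background contributions separate, so the downstream classifier can still weight the two regions differently.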



Abstract

The invention discloses a saliency-based multi-modal small-sample learning method comprising a multi-modal combination part and a label propagation part. In the multi-modal combination process: first, saliency maps of the support-set sample images are extracted by a pre-trained saliency detection network, separating the foreground and background of each sample image; second, word embeddings of the semantics of the foreground and background regions are obtained through a GloVe model, serving as semantic information to assist visual-information classification; finally, the foreground, background, and semantic information of the support-set sample images are adaptively combined through a modal mixing mechanism to obtain sample feature representations carrying multi-modal information. In the label propagation process: first, a graph is constructed over the modally combined support-set and query-set samples according to the K-nearest-neighbor method; then, the categories of the unlabeled query-set samples are predicted from the labeled support-set samples.
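The label propagation part can be sketched with the standard closed-form propagation of Zhou et al. (2004) over a K-nearest-neighbor graph. The Gaussian bandwidth, the values of `k` and `alpha`, and the toy features below are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

def knn_graph(X, k=3):
    """Symmetric k-NN affinity matrix with Gaussian weights (sketch)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * d2.mean()))        # bandwidth: mean squared distance
    np.fill_diagonal(W, 0.0)
    idx = np.argsort(-W, axis=1)[:, :k]        # k strongest neighbors per row
    mask = np.zeros_like(W, dtype=bool)
    mask[np.arange(len(X))[:, None], idx] = True
    return np.where(mask | mask.T, W, 0.0)     # symmetrize the graph

def propagate(W, y_support, n_classes, alpha=0.5):
    """Closed-form label propagation: F = (I - alpha * S)^{-1} Y,
    with S the symmetrically normalized affinity (Zhou et al. 2004)."""
    n = W.shape[0]
    D = W.sum(1)
    S = W / np.sqrt(np.outer(D, D) + 1e-12)
    Y = np.zeros((n, n_classes))
    for i, c in y_support.items():             # labeled support samples only
        Y[i, c] = 1.0
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F.argmax(1)                         # predicted class per node

# two well-separated clusters; one labeled support sample per class,
# the remaining four points play the role of unlabeled query samples
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
preds = propagate(knn_graph(X, k=2), {0: 0, 3: 1}, n_classes=2)
print(preds)   # [0 0 0 1 1 1]
```

Here labels flow along the manifold structure of the combined features: each unlabeled query node adopts the class whose labeled support node is best connected to it in the graph.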

Description

technical field

[0001] The invention relates to a small-sample image classification method, and in particular to a small-sample learning method based on saliency-driven multimodal data processing.

Background technique

[0002] With the introduction of convolutional neural networks, deep learning has achieved breakthroughs in image classification, speech recognition, and object detection. However, these approaches usually require large amounts of labeled training data, such as ImageNet. In daily life it is often very difficult to obtain such data (for example, photos of endangered species or medical images), which severely limits the applicability of these models in the real world, and labeling images also consumes considerable manpower and financial resources. Small-sample learning, by contrast, seeks to identify a new object from a very small number of samples, use previously learned knowledge to help learn new content quickly, and integrate new concepts into existing concept networks.


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06N3/04
CPC: G06N3/045; G06F18/217; G06F18/24
Inventor: 翁仲铭, 陶文源
Owner: TIANJIN UNIV