A saliency-based multimodal small-shot learning method

A multimodal learning method applied in character and pattern recognition, biological neural network models, and related instruments. It addresses the difficulty of collecting large datasets, the heavy manpower and financial cost of labeling, and the resulting limits on model applicability, with the effects of enhancing classification ability, strengthening usability, and enriching feature representation.

Active Publication Date: 2022-04-19
TIANJIN UNIV





Detailed Description of the Embodiments

[0023] The present invention will be described in further detail below in conjunction with the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit the present invention.

[0024] The saliency-based multimodal small-sample learning method provided by the present invention adds saliency-map extraction, multimodal information combination, and label propagation to a conventional small-sample classification pipeline. First, a saliency detection network produces a saliency map for each image, separating its foreground and background regions. Next, a multimodal hybrid model combines semantic information with visual information, so that the semantics assist the visual features in classification. Finally, the manifold structure of the data samples is constructed and label propagation is used to predict the categories of the unlabeled query samples.
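The two preparatory steps just described, splitting an image into foreground and background with a saliency map and adaptively mixing visual features with semantic ones, can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names are hypothetical, the saliency map is assumed to come from a pretrained detection network, and the mixing gate would in practice be produced by a learned modality-mixing module rather than passed in as a constant.

```python
import numpy as np

def split_by_saliency(image, saliency, thresh=0.5):
    # Binarize the saliency map and split the image into a
    # foreground (salient object) region and a background region.
    mask = (saliency >= thresh).astype(image.dtype)
    return image * mask[..., None], image * (1.0 - mask)[..., None]

def mix_modalities(visual, semantic, gate):
    # Adaptive convex combination of a visual feature vector and its
    # semantic word embedding; `gate` in [0, 1] stands in for the
    # output of a small learned network conditioned on both inputs.
    return gate * visual + (1.0 - gate) * semantic
```

A usage example: `split_by_saliency(img, sal)` on an H×W×3 image and an H×W saliency map yields two masked images, and `mix_modalities(v, s, 0.25)` weights the semantic embedding three times as heavily as the visual feature.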



Abstract

The invention discloses a saliency-based multimodal small-sample learning method comprising two parts, multimodal combination and label propagation. In the multimodal combination stage, a pretrained saliency detection network first extracts a saliency map from each support-set sample image, separating the image's foreground from its background. Second, semantic word embeddings for the foreground and background regions of the support-set images are obtained through the GloVe model and serve as semantic information to assist the visual classification. Finally, the foreground, background, and semantic information of each support-set image are adaptively combined through a modality mixing mechanism, yielding a sample feature representation that carries multimodal information. In the label propagation stage, a graph over the samples is constructed with the K-nearest-neighbor method, and the categories of the unlabeled query-set samples are predicted from the labeled support-set samples.
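The label propagation stage described above can be sketched with the classic closed-form propagation of Zhou et al. over a K-nearest-neighbor affinity graph. This is an illustrative sketch under assumptions, not the patent's exact formulation: the Gaussian bandwidth, `k`, and `alpha` are arbitrary choices here, and the labeled support samples are assumed to occupy the first rows of the feature matrix, with the query samples last.

```python
import numpy as np

def knn_graph(X, k=3):
    # Pairwise squared Euclidean distances between sample features.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sigma = d2.mean() + 1e-12          # crude global bandwidth choice
    W = np.exp(-d2 / sigma)
    np.fill_diagonal(W, 0.0)
    # Keep only the k strongest edges per node, then symmetrize.
    keep = np.argsort(-W, axis=1)[:, :k]
    M = np.zeros_like(W)
    rows = np.arange(W.shape[0])[:, None]
    M[rows, keep] = W[rows, keep]
    return np.maximum(M, M.T)

def label_propagation(W, y_support, n_classes, n_query, alpha=0.5):
    # Zhou et al. (2004) closed form: F = (I - alpha * S)^-1 Y,
    # with S the symmetrically normalized affinity matrix.
    d = W.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    n = W.shape[0]
    Y = np.zeros((n, n_classes))
    Y[np.arange(len(y_support)), y_support] = 1.0  # one-hot support labels
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F[-n_query:].argmax(1)       # predicted classes for query samples
```

With two labeled support points and two nearby unlabeled query points, each query inherits the label of its closest support sample through the graph.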

Description

Technical field
[0001] The invention relates to a small-sample image classification method, and in particular to a small-sample learning method based on saliency-driven multimodal data processing.
Background technique
[0002] With the introduction of convolutional neural networks, deep learning has made breakthroughs in image classification, speech recognition, and object detection. However, these methods usually require a large amount of labeled data for training, such as ImageNet. In real life, obtaining large amounts of data is very difficult, for example photographs of endangered species or medical images, which seriously limits the applicability of such models in the real world; labeling images also consumes substantial manpower and financial resources. Small-sample learning aims to recognize a new object from a very small number of samples, to use previously learned knowledge to help learn new content quickly, and to integrate new concepts into an existing concept network.


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06V10/778, G06V10/764, G06V10/82, G06K9/62, G06N3/04
CPC: G06N3/045, G06F18/217, G06F18/24
Inventors: 翁仲铭, 陶文源
Owner TIANJIN UNIV