
A Contrastive Self-Supervised Learning Method Based on Multi-Network Framework

A self-supervised learning and multi-network technology, applied to neural learning methods, biological neural network models, instruments, and the like. It addresses problems such as poor ability to recognize similar samples, increased network parameters, and increased training time, with the effects of improving network model performance and enhancing training results.

Active Publication Date: 2022-07-08
NANJING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

[0004] Although deep learning methods have achieved excellent results in computer vision tasks, one problem remains: the performance of deep neural networks depends heavily on large amounts of labeled data.
Classic self-supervised learning methods include MoCo, SimCLR, BYOL, and SimSiam, but each has its own drawbacks. MoCo (Momentum Contrast) proposed a momentum-update framework and used a queue to maintain negative samples, achieving strong performance; however, the negatives in the queue may include samples of the same class as, or similar to, the positive samples, and pushing representations away from such negatives can hurt network performance. SimCLR (A Simple Framework for Contrastive Learning of Visual Representations) established the importance of the number of negative samples, of the data augmentation strategy, and of adding a Multi-Layer Perceptron (MLP) to the network during training; but because its negative samples come from the other images in the same batch, its performance depends heavily on the batch size, and it requires more training epochs (or iterations), placing high demands on computing hardware, especially the GPU. BYOL (Bootstrap Your Own Latent), building on this earlier work, showed that a trained network can still perform excellently without using negative samples at all; however, in place of negatives it relies on an asymmetric structure, which slightly increases the number of network parameters compared with a symmetric structure and thus lengthens training, and because it uses only two augmented views of the same image as positive samples, the model may be less able to recognize samples of the same class that differ greatly in appearance. SimSiam (Simple Siamese networks) has the same network structure and loss function as BYOL; the only difference is that it replaces the momentum encoder with a direct stop-gradient update, so it can be regarded as a simplified BYOL. It trains and converges faster than BYOL and outperforms it when the number of training epochs is small, but falls behind BYOL as the number of epochs increases.
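The structural difference the passage contrasts, BYOL's momentum (EMA) target versus SimSiam's stop-gradient, can be made concrete with a short sketch. This is an illustrative PyTorch fragment, not code from the patent; the toy `online`/`target` networks and the coefficient `tau` are assumptions:

```python
import copy
import torch
import torch.nn as nn

online = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 16))
target = copy.deepcopy(online)          # BYOL-style momentum target
for p in target.parameters():
    p.requires_grad = False             # target is never updated by backprop

@torch.no_grad()
def momentum_update(online, target, tau=0.996):
    # BYOL / MoCo style: target <- tau * target + (1 - tau) * online
    for po, pt in zip(online.parameters(), target.parameters()):
        pt.mul_(tau).add_(po, alpha=1 - tau)

x = torch.randn(4, 32)
z_online = online(x)
z_byol_target = target(x)               # gradients blocked by frozen params
z_simsiam_target = online(x).detach()   # SimSiam: same weights, stop-gradient
momentum_update(online, target)         # EMA step after each iteration
```

The stop-gradient variant shares weights and only detaches the target branch, which is why SimSiam has fewer parameters to maintain and iterates faster, while the EMA target gives BYOL a slowly-moving, more stable teacher.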




Detailed Description of the Embodiments

[0026] To make the content of the present invention easier to understand, the invention is described in further detail below with reference to the accompanying drawings.

[0027] As shown in Figure 1, a contrastive self-supervised learning method based on a multi-network framework comprises the following steps:

[0028] Step 1: Apply data augmentation to each image in the training set to obtain three independent augmented views;

[0029] The data augmentation strategies applied to the training images include random cropping and resizing, horizontal flipping, color distortion, and grayscale conversion; random Gaussian blurring is additionally applied to the resulting data. A minimal sketch of such a pipeline follows;
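the sketch below uses torchvision. The crop size, jitter strengths, blur kernel, and application probabilities are SimCLR-style assumptions, not values taken from the patent:

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224),                      # random crop + resize
    transforms.RandomHorizontalFlip(p=0.5),                 # horizontal flip
    transforms.RandomApply(                                 # color distortion
        [transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),                      # grayscale conversion
    transforms.RandomApply(                                 # random Gaussian blur
        [transforms.GaussianBlur(23, sigma=(0.1, 2.0))], p=0.5),
    transforms.ToTensor(),
])

def three_views(image):
    # Three independent draws from the same stochastic pipeline.
    return augment(image), augment(image), augment(image)
```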

[0030] Step 2: Input the three augmented views into the backpropagation network, the stop-gradient network, and the momentum network, respectively, to obtain the corresponding image representations;

[0031] Here, the backpropagation network includes an encoder and ...
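The description is truncated at this point, so the following is a hedged sketch of how the three branches might be wired, assuming (as in BYOL and SimSiam) that each network is an encoder followed by a projection MLP; `make_network`, the ResNet-18 backbone, and the embedding size are illustrative assumptions rather than details from the patent:

```python
import copy
import torch
import torch.nn as nn
from torchvision.models import resnet18

def make_network(dim=128):
    # Assumed layout: encoder backbone followed by a projection MLP.
    encoder = resnet18(num_classes=dim)
    projector = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    return nn.Sequential(encoder, projector)

backprop_net = make_network()               # updated by gradient descent
stopgrad_net = copy.deepcopy(backprop_net)  # updated from backprop_net, no grads
momentum_net = copy.deepcopy(backprop_net)  # updated by EMA of backprop_net
for net in (stopgrad_net, momentum_net):
    for p in net.parameters():
        p.requires_grad = False

v1, v2, v3 = (torch.randn(8, 3, 224, 224) for _ in range(3))
z1 = backprop_net(v1)                       # representation of view 1
with torch.no_grad():
    z2 = stopgrad_net(v2)                   # representation of view 2
    z3 = momentum_net(v3)                   # representation of view 3
```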



Abstract

The invention discloses a contrastive self-supervised learning method based on a multi-network framework, comprising the steps of: applying a data augmentation method to each image in a training set to obtain three independent augmented views; inputting the views into the designed backpropagation network, stop-gradient network, and momentum network, respectively; computing, in combination with a negative-sample queue, the loss between the output vectors of the backpropagation network and the stop-gradient network, and between the backpropagation network and the momentum network, and summing the two to obtain the total loss; updating the parameters of the backpropagation network by gradient descent to minimize the total loss; updating the parameters of the stop-gradient network and the momentum network from the parameters of the backpropagation network; and updating the negative-sample queue with the momentum network. Building on classic self-supervised learning methods, the invention uses a multi-network framework to introduce more positive sample pairs, and combines the end-to-end and momentum mechanisms to introduce more negative samples, thereby achieving a better pre-training effect.
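Read literally, the abstract describes one training step per batch. The sketch below is a hedged reconstruction of that step, assuming InfoNCE-style losses against a shared negative-sample queue; the temperature `t`, momentum coefficient `tau`, direct-copy rule for the stop-gradient network, and FIFO queue update are assumptions, not details confirmed by this excerpt:

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(q, k, queue, t=0.2):
    # q, k: (N, D) embeddings; queue: (K, D) negatives. Positive at index 0.
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    pos = (q * k).sum(dim=1, keepdim=True)            # (N, 1) positive logits
    neg = q @ queue.t()                               # (N, K) negative logits
    logits = torch.cat([pos, neg], dim=1) / t
    labels = torch.zeros(len(q), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)

def train_step(backprop_net, stopgrad_net, momentum_net, queue, views, opt, tau=0.99):
    v1, v2, v3 = views
    z1 = backprop_net(v1)
    with torch.no_grad():
        z2, z3 = stopgrad_net(v2), momentum_net(v3)
    # Two losses against the same queue, summed into the total loss.
    loss = info_nce(z1, z2, queue) + info_nce(z1, z3, queue)
    opt.zero_grad(); loss.backward(); opt.step()      # update backprop net only
    with torch.no_grad():
        for pb, ps, pm in zip(backprop_net.parameters(),
                              stopgrad_net.parameters(),
                              momentum_net.parameters()):
            ps.copy_(pb)                              # stop-gradient net: direct copy (assumed)
            pm.mul_(tau).add_(pb, alpha=1 - tau)      # momentum net: EMA update
    # Refresh the queue with the momentum network's keys (FIFO, fixed size).
    new_queue = torch.cat([F.normalize(z3, dim=1), queue])[: len(queue)]
    return loss.item(), new_queue

# Toy usage with small linear "networks" (illustrative only).
net = nn.Linear(32, 16)
s_net, m_net = copy.deepcopy(net), copy.deepcopy(net)
for n in (s_net, m_net):
    for p in n.parameters():
        p.requires_grad = False
opt = torch.optim.SGD(net.parameters(), lr=0.05)
queue = F.normalize(torch.randn(256, 16), dim=1)
views = tuple(torch.randn(8, 32) for _ in range(3))
loss, queue = train_step(net, s_net, m_net, queue, views, opt)
```

Summing the two losses couples the end-to-end signal (the stop-gradient branch) with the momentum signal (the queue of momentum-network keys), which is the combination the abstract claims.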

Description

Technical field

[0001] The invention relates to the field of self-supervised visual representation learning, and in particular to a contrastive self-supervised learning method based on a multi-network framework.

Background technology

[0002] In recent years, with the rapid development of the Internet and the maturity of multimedia technology, society's degree of digitization and informatization has been improving steadily, and the arrival of the big-data era in particular has put digital information resources into a stage of explosive growth. With the popularization of smart mobile terminals such as smartphones and tablets, digital images have become an indispensable part of daily life, playing a very important role in social interaction, shopping, and learning. Nowadays, a large number of digital images are uploaded and shared on the Internet every day, and image data resources show an explosive growth trend. How to classify and retrieve these ...


Application Information

Patent Type & Authority Patents(China)
IPC IPC(8): G06V10/764G06V10/74G06V10/774G06V10/82G06K9/62G06N3/08
CPCG06N3/084G06F18/22G06F18/214G06F18/24
Inventors: 龙显忠, 张智猗
Owner: NANJING UNIV OF POSTS & TELECOMM