
An Unsupervised Image Recognition Method Based on Parameter Transfer Learning

A transfer learning and image recognition technology, applied in the field of image recognition, that addresses the problems of long training time and the need for a large number of unlabeled samples, thereby reducing training time, solving the unsupervised recognition problem, and improving learning efficiency.

Inactive Publication Date: 2020-06-09
HARBIN INST OF TECH

Problems solved by technology

[0004] The purpose of the present invention is to solve the problems of the traditional unsupervised image recognition method: it requires a large number of unlabeled samples, and this large sample count in turn causes long training time.



Examples


Specific Embodiment 1

[0035] Specific Embodiment 1: As shown in Figure 1, the unsupervised image recognition method based on parameter transfer learning described in this embodiment includes the following steps:

[0036] Step 1: Collect images with category labels from the auxiliary domain to form the auxiliary-domain image set X_s, and collect images without category labels from the application domain to form the application-domain image set X_t. The application domain refers to whatever field the method of the present invention is applied to; the auxiliary domain refers to a field whose sample content is similar to that of the application domain and which contains a large number of labeled samples.

[0037] Step 2: Construct two convolutional neural networks with identical structure and use them as the auxiliary-domain network and the application-domain network respectively, where the auxiliary-domain network is denoted N_s and the application-domain network is denoted N_t ...
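The core of the method is direct parameter transfer between the two structurally identical networks N_s and N_t. The following is a minimal structural sketch of that idea, not the patent's actual implementation: each network is represented as a dict of NumPy parameter arrays (the layer names and sizes here are illustrative placeholders), and transfer simply copies every parameter tensor from the auxiliary-domain network into the application-domain network.

```python
import numpy as np

def make_network(seed=None):
    """Build one network as a dict of parameter arrays.

    Layer names and shapes are illustrative placeholders, not the
    patent's exact architecture or dimensions."""
    r = np.random.default_rng(seed)
    return {
        "conv1": r.standard_normal((16, 3, 3, 3)),
        "fc3": r.standard_normal((10, 128)),
    }

# Two networks with identical structure but different initial parameters:
N_s = make_network(seed=1)  # auxiliary-domain network (trained on labeled X_s)
N_t = make_network(seed=2)  # application-domain network (for unlabeled X_t)

def transfer_parameters(src, dst):
    """Copy every parameter tensor from the source network into the
    destination network (direct parameter transfer)."""
    for name, weights in src.items():
        dst[name] = weights.copy()

# After training N_s on the labeled auxiliary domain (omitted here),
# its parameters are transferred to N_t:
transfer_parameters(N_s, N_t)
```

Copying (rather than aliasing) the arrays matters: after transfer, N_t can be adapted on application-domain data without mutating N_s.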

Specific Embodiment 2

[0048] Specific Embodiment 2 differs from Specific Embodiment 1 in that the specific process of Step 1 is as follows:

[0049] Collect images with category labels from the auxiliary domain to form the auxiliary-domain image set X_s, and collect images without category labels from the application domain to form the application-domain image set X_t, where the number of image samples in X_t is one tenth of the number of image samples in X_s.

[0050] All images in both the auxiliary-domain image set X_s and the application-domain image set X_t are scaled to the same size.
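The data-preparation step above can be sketched as follows. This is a hedged illustration with made-up sample counts and a toy nearest-neighbour resize standing in for any real image-scaling routine; the 10:1 ratio between X_s and X_t and the rescaling to a common size are the only details taken from the embodiment.

```python
import numpy as np

def resize_nearest(img, size):
    """Nearest-neighbour resize of an H x W x C image array to
    size x size x C; a stand-in for a real image-scaling routine."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

# Hypothetical raw collections: labeled auxiliary-domain images, and one
# tenth as many unlabeled application-domain images, at assorted sizes.
X_s_raw = [np.zeros((40 + i, 60, 3)) for i in range(100)]
X_t_raw = [np.zeros((50, 30 + i, 3)) for i in range(10)]

# Scale every image in both sets to the same fixed size.
SIZE = 32
X_s = np.stack([resize_nearest(im, SIZE) for im in X_s_raw])
X_t = np.stack([resize_nearest(im, SIZE) for im in X_t_raw])
```

Stacking only works because every image has been brought to a common size first, which is exactly why the embodiment requires the rescaling step.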

Specific Embodiment 3

[0051] Specific Embodiment 3 differs from Specific Embodiment 1 in that the specific process of Step 2 is as follows:

[0052] Construct two convolutional neural networks with identical structure and use them as the auxiliary-domain network and the application-domain network respectively, where the auxiliary-domain network is denoted N_s and the application-domain network is denoted N_t.

[0053] As shown in Figure 2, each convolutional neural network consists of five convolutional layers conv1~conv5 followed by three fully connected layers fc1~fc3, where the fully connected layers are located after the convolutional layers.

[0054] The fully connected layers are followed by an image classifier with a total of C branches, where C represents the total number of image categories that can be recognized, and the output y of the cth ...
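The shape bookkeeping implied by this architecture (five convolutional layers feeding three fully connected layers, with a C-branch classifier at the end) can be traced numerically. The kernel sizes, strides, channel counts, input resolution, and fc widths below are illustrative assumptions; only the 5-conv/3-fc layout and the C-way output come from the embodiment.

```python
import numpy as np

C = 10  # assumed total number of recognizable image categories

def conv_out(size, kernel, stride, pad=0):
    """Spatial output size of a convolution layer."""
    return (size + 2 * pad - kernel) // stride + 1

# conv1..conv5 as (kernel, stride, padding, out_channels); hypothetical
# hyper-parameters, not the patent's exact values.
convs = [(7, 2, 3, 32), (5, 2, 2, 64), (3, 1, 1, 128),
         (3, 1, 1, 128), (3, 2, 1, 256)]

size, channels = 224, 3  # assumed input: 224 x 224 RGB
for k, s, p, c_out in convs:
    size, channels = conv_out(size, k, s, p), c_out

features = size * size * channels  # flattened input to fc1
fc_sizes = [1024, 256, C]          # fc1, fc2, fc3; fc3 has one output per branch

# A softmax over the fc3 outputs yields one probability per category branch.
logits = np.random.default_rng(0).standard_normal(C)
probs = np.exp(logits) / np.exp(logits).sum()
```

Tracing the spatial size layer by layer is a quick sanity check that the flattened feature vector entering fc1 has the dimension the fully connected weights expect.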


Abstract

An unsupervised image recognition method based on parameter transfer learning belongs to the technical field of image recognition. The invention solves the problems that the traditional unsupervised image recognition method requires a large number of unlabeled samples and that this large sample count causes long training time. The present invention performs transfer learning directly on the parameters of the recognition model, so that training requires only labeled samples from the auxiliary domain and a small number of unlabeled samples from the application domain. The method overcomes the traditional unsupervised methods' dependence on large numbers of unlabeled samples, reduces reliance on labeled samples, solves the unsupervised recognition problem, improves the learning efficiency of the model, and is better suited to large-scale application scenarios. The invention can be applied to the technical field of image recognition.

Description

Technical Field

[0001] The invention belongs to the technical field of image recognition, and in particular relates to an unsupervised image recognition method.

Background Technique

[0002] Image recognition is a technique for detecting objects of interest in static images or dynamic video. An effective image recognition method is the premise and basis for intelligent recognition tasks such as object tracking, scene analysis, and environment perception. Image recognition technology has very wide application in real life; for example, pedestrian/vehicle detection in autonomous driving and face recognition in the security field are both built on image recognition.

[0003] Most current image recognition technologies are designed and implemented on the basis of machine learning theory. The main approach is to collect image samples containing category labels from application scenarios and train the recognition model s...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/62, G06N3/04
CPC: G06N3/045, G06F18/214
Inventors: 杨春玲, 陈宇, 张岩, 李雨泽, 朱敏
Owner: HARBIN INST OF TECH