
CO-training method based on unlabeled-sample consistency determination

A co-training method for unlabeled samples, applied in the field of multi-view learning. It addresses the problems that multi-angle remote sensing increases the difficulty of jointly analyzing ground objects in the same area, and that the classification accuracy of multi-angle remote sensing images is low, thereby improving the classification effect and accuracy.

Active Publication Date: 2018-11-13
HARBIN INST OF TECH


Problems solved by technology

[0005] The purpose of the present invention is to solve the problem that existing multi-angle remote sensing increases the difficulty of joint analysis of ground objects in the same area, especially the difficulty of analyzing changes in those ground objects, which makes the classification accuracy of multi-angle remote sensing images low. To this end, a co-training method based on unlabeled-sample consistency determination (CO-training with Unlabeled Sample's Consistency, hereinafter CO-USC) is proposed.


Examples

Specific Embodiment 1

[0025] Specific Embodiment 1: The specific process of the co-training method based on unlabeled-sample consistency determination in this embodiment is as follows:

[0026] Consistency determination: compare the difference in a classifier's classification performance before and after an unlabeled sample is added, and determine the confidence of the unlabeled sample from that comparison. The underlying idea is simple: if a sample and a label are added to the classifier's training set and the classifier's classification behavior remains exactly the same, the sample can be considered to correspond to the label completely. In other words, the closer the classifier's classification performance before and after a new training sample and label are added, the higher the confidence between the sample and its corresponding label. However, this idea is of limited value for ordinary single-view samples, al...
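The consistency idea above can be sketched as follows: score a pseudo-labeled sample by how little the classifier's predictions on a probe set change after that sample is added to the training set. This is a minimal illustration, not the patent's implementation; the 1-nearest-neighbor classifier and all function names are assumptions chosen for brevity.

```python
import numpy as np

def predict_1nn(X_train, y_train, X):
    # 1-nearest-neighbor prediction: label of the closest training point
    d = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[np.argmin(d, axis=1)]

def consistency_score(X_train, y_train, x_new, y_new, X_probe):
    """Fraction of probe predictions left unchanged after (x_new, y_new)
    is added to the training set; higher = more consistent = more trusted."""
    before = predict_1nn(X_train, y_train, X_probe)
    X_aug = np.vstack([X_train, x_new[None, :]])
    y_aug = np.append(y_train, y_new)
    after = predict_1nn(X_aug, y_aug, X_probe)
    return float(np.mean(before == after))
```

A sample whose pseudo-label agrees with the existing decision boundary leaves the probe predictions untouched (score 1.0); a contradictory label flips some of them, lowering the score.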

Specific Embodiment 2

[0038] Specific Embodiment 2: This embodiment differs from Embodiment 1 in that the classifier in step 1 is a supervised or semi-supervised classifier.

[0039] Other steps and parameters are the same as those in Embodiment 1.

Specific Embodiment 3

[0040] Specific Embodiment 3: This embodiment differs from Embodiment 1 or 2 in that, in step 2, the confidence and pseudo-label of each unlabeled sample in the captured image are determined, and trusted samples are selected from the unlabeled samples in the captured image according to that confidence and pseudo-label; the specific process is:

[0041] Step 2.1:

[0042] Take the first view as the main view, and the remaining views as non-main views.

[0043] Perform USC determination in the non-main views to obtain the USC sequences of the non-main views; there are N-1 such sequences. Superimpose the non-main-view USC sequences, and determine the USC confidence of the main view's unlabeled samples from the superimposed sequence (the lower the value of the superimposed sequence, the higher the USC confidence); (USC determination is performed in the non...
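The superposition step above can be sketched as follows: the N-1 per-view USC sequences are summed element-wise, and the samples with the lowest summed value (i.e. the highest USC confidence) are taken as trusted. The function name, the element-wise sum, and the top-k cutoff are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def select_trusted(usc_sequences, pseudo_labels, k):
    """usc_sequences: (N-1, M) array of per-view USC values for M unlabeled
    samples, lower value = higher confidence. Returns the indices and
    pseudo-labels of the k most trusted samples."""
    stacked = np.sum(np.asarray(usc_sequences, dtype=float), axis=0)
    trusted = np.argsort(stacked)[:k]  # lowest summed USC value first
    return trusted, [pseudo_labels[i] for i in trusted]
```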



Abstract

The invention discloses a CO-training method based on unlabeled-sample consistency determination, relates to co-training and multi-angle image classification, and aims to solve the problem that existing multi-angle remote sensing increases the difficulty of joint analysis of ground objects in the same area, especially the difficulty of change analysis of those ground objects, which makes the classification accuracy of multi-angle remote sensing images low. The process includes: 1, carrying out initial classification; 2, selecting trusted samples from the unlabeled samples in the shot images; 3, obtaining a retrained classifier for the current view angle, until retraining of the classifiers corresponding to all view angles is completed; 4, obtaining a classification result for the current view angle, until reclassification by the classifiers corresponding to all view angles is completed; 5, repeatedly executing 2, 3 and 4 until an iteration termination condition is met, obtaining a classification result for each view angle, carrying out voting, and using the labels with the highest voting rates as the labels of the samples in the shot images of step 1 that have no labels. The method is used in the field of digital image processing.
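The five-step loop in the abstract can be sketched as a toy two-view co-training round. This is a simplified stand-in, assuming nearest-centroid classifiers per view and distance-to-centroid as a proxy for the patent's USC confidence; all names are illustrative.

```python
import numpy as np

def centroids(X, y):
    # step 1/3: "train" a nearest-centroid classifier (per-class mean vectors)
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(cent, X):
    # step 4: classify by distance to the nearest class centroid
    labs = sorted(cent)
    d = np.stack([np.linalg.norm(X - cent[c], axis=1) for c in labs])
    idx = d.argmin(axis=0)
    return np.array([labs[i] for i in idx]), d.min(axis=0)

def co_usc_toy(X1, X2, y, U1, U2, rounds=2):
    """X1/X2: labeled features of the same samples in views 1/2;
    U1/U2: the same unlabeled samples seen from each view.
    Returns voted labels for the unlabeled set."""
    L1, L2, y1, y2 = X1.copy(), X2.copy(), y.copy(), y.copy()
    for _ in range(rounds):
        # step 2: each view pseudo-labels the unlabeled set; confidence here
        # is distance to the nearest centroid (a stand-in for USC confidence)
        p1, d1 = predict(centroids(L1, y1), U1)
        p2, d2 = predict(centroids(L2, y2), U2)
        t1, t2 = int(d1.argmin()), int(d2.argmin())
        # steps 3-4: cross-add each view's most trusted sample, then retrain
        L1, y1 = np.vstack([L1, U1[t2][None]]), np.append(y1, p2[t2])
        L2, y2 = np.vstack([L2, U2[t1][None]]), np.append(y2, p1[t1])
    # step 5: vote across the per-view classification results
    votes = np.stack([predict(centroids(L1, y1), U1)[0],
                      predict(centroids(L2, y2), U2)[0]])
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

Cross-adding each view's trusted samples to the other view's training set is what lets the complementary views correct one another over the iterations.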

Description

Technical Field

[0001] The invention belongs to the field of digital image processing, relates to co-training and multi-angle image classification, and is a multi-view learning method.

Background

[0002] Feature data obtained from different levels of the same object, or obtained through different channels, are generally called multi-view data. Multi-view learning usually needs to follow two principles: consistency and complementarity. The consistency principle means that different views of the same object are related to each other; the complementarity principle means that different views of the same object differ and can serve as complementary features. Existing multi-view learning algorithms fall mainly into three categories: co-training, subspace learning and multi-kernel learning. Among them, the co-training algorithm learns two or more diffe...


Application Information

IPC(8): G06K9/62
CPC: G06F18/2155; G06F18/24
Inventors: 谷延锋 (Gu Yanfeng), 李天帅 (Li Tianshuai)
Owner HARBIN INST OF TECH