
A domain adaptive deep learning method and a readable storage medium

A deep learning and domain adaptation technology, applied to neural learning methods, instruments, biological neural network models, and the like. It addresses problems such as the trained model not being optimal and convergence being more difficult than in non-adversarial training, and achieves the effect of improving performance.

Active Publication Date: 2019-06-21
NAT INNOVATION INST OF DEFENSE TECH PLA ACAD OF MILITARY SCI

AI Technical Summary

Problems solved by technology

Domain adversarial training needs to optimize a pair of opposing objective functions simultaneously; its training process converges with more difficulty than non-adversarial training, and the trained model is often not optimal.




Detailed Description of the Embodiments

[0020] The present invention will be described in further detail below in conjunction with the accompanying drawings.

[0021] Figure 1 shows a schematic diagram of the domain-adaptive deep learning training process of the embodiment of the present invention. The method mainly includes the following steps:

[0022] Step 1: Apply rotation transformations to the target domain images to obtain a self-supervised learning training sample set;

[0023] Step 2: Jointly train on the transformed self-supervised learning training sample set and the source domain training samples to obtain a deep learning model;

[0024] Step 3: Use the model obtained from the above joint training for the vision task T on the target domain.

[0025] In step 1, the target domain images are rotated to obtain a training sample set for self-supervised learning. The process first rotates each target domain image by 0°, 90° and 180° respectively, and the three rotation angles correspond to category labels 0, 1 and 2, respectively.
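As an illustrative sketch only (a PyTorch-style rendering assumed here, not code taken from the patent), the sample generation of step 1 can be expressed as follows; the function name and the (N, C, H, W) tensor layout are assumptions.

```python
# Illustrative sketch of step 1: build the self-supervised sample set by
# rotating unlabeled target-domain images and attaching rotation labels.
import torch

def make_rotation_samples(target_images: torch.Tensor):
    """target_images: unlabeled target-domain batch of shape (N, C, H, W).

    Returns (rotated_images, rotation_labels): each input image appears three
    times, rotated by 0°, 90° and 180°, with pseudo-labels 0, 1 and 2.
    """
    rotated, labels = [], []
    for k in (0, 1, 2):  # k quarter-turns, i.e. 0°, 90°, 180°
        rotated.append(torch.rot90(target_images, k=k, dims=(2, 3)))
        labels.append(torch.full((target_images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated, dim=0), torch.cat(labels, dim=0)
```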



Abstract

The invention discloses a domain-adaptive deep learning method, which comprises the following steps: carrying out rotation transformation on target domain images to obtain a self-supervised learning training sample set; and carrying out joint training on the transformed self-supervised learning training sample set and the source domain training sample set to obtain a domain-adaptive deep learning model used for a visual task on the target domain. According to the method, target domain samples do not need to be labeled, the feature representation of the target domain can be effectively learned, and the performance of a computer vision task on the target domain is improved. The invention further discloses a domain-adaptive deep learning readable storage medium which also has the above beneficial effects.
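A minimal sketch of the joint training described above, assuming a shared feature extractor with two classification heads: a task head trained on labeled source samples and a rotation head trained on the self-supervised target samples. The network sizes, the loss weight `lambda_rot`, and the optimizer settings are illustrative assumptions, not values specified by the patent.

```python
# Illustrative sketch of the joint training on source and target batches.
import torch
import torch.nn as nn

backbone = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
task_head = nn.Linear(16, 10)   # number of source-domain classes (assumed: 10)
rot_head = nn.Linear(16, 3)     # three rotation pseudo-classes: 0°, 90°, 180°

optimizer = torch.optim.SGD(
    list(backbone.parameters()) + list(task_head.parameters()) + list(rot_head.parameters()),
    lr=0.01, momentum=0.9,
)
criterion = nn.CrossEntropyLoss()
lambda_rot = 1.0                # weight of the self-supervised loss (assumed)

def joint_step(src_x, src_y, tgt_x):
    """One update on a labeled source batch and an unlabeled target batch."""
    rot_x, rot_y = make_rotation_samples(tgt_x)  # sketch from the embodiment section
    task_loss = criterion(task_head(backbone(src_x)), src_y)
    rot_loss = criterion(rot_head(backbone(rot_x)), rot_y)
    loss = task_loss + lambda_rot * rot_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

After joint training, only the backbone and the task head would be used for the vision task T on the target domain, as in step 3 of the embodiment.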

Description

Technical Field

[0001] The invention relates to the field of domain-adaptive deep learning, and in particular to a domain-adaptive deep learning method and a readable storage medium for computer vision tasks.

Background Art

[0002] Models for computer vision tasks such as image classification, image semantic segmentation, object recognition, and object detection are usually obtained through supervised learning. Supervised learning, especially supervised learning based on deep neural networks, usually requires a large number of labeled training samples, and labeling these samples takes considerable manpower and material resources. For example, image segmentation requires pixel-by-pixel semantic labeling, which is very difficult and costly. After a model is trained on the labeled data, it is applied to the test data; supervised learning is very effective when the test data has the same distribution as the training data. However, in practical applicat...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06N3/04; G06N3/08
Inventor: 许娇龙, 聂一鸣, 肖良, 朱琪, 商尔科, 戴斌
Owner: NAT INNOVATION INST OF DEFENSE TECH PLA ACAD OF MILITARY SCI