
A zero sample learning method based on a self-coding generative adversarial network

A zero-shot learning and self-encoding technology, applied in neural learning methods, biological neural network models, computer components, etc., which can address problems such as the weakening of the alignment relationship between different modalities and the neglect of cross-modal relationships.

Pending Publication Date: 2019-04-09
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

However, most of these models focus on the semantic alignment between category semantics and visual features in only one direction and ignore the reverse relationship, which weakens the alignment between the two modalities.


Embodiment Construction

[0032] The zero-shot learning method based on the autoencoder generative adversarial network of the present invention is described in detail below with reference to the embodiments and the accompanying drawings.

[0033] The zero-shot learning method of the present invention uses the visual sample data of seen categories and the corresponding category semantic features to train a generative adversarial network built on an autoencoder framework; its structure is shown in Figure 1. The method includes the following steps:

[0034] 1) Input the visual feature x of a seen-category sample into the encoder and, under the supervision of the category semantic feature a corresponding to that sample, obtain the hidden category semantic feature and a latent noise feature. The encoder is a three-layer network with the structure: fully connected layer - hidden la...
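Step 1) describes an encoder that takes the visual feature x and, supervised by the category semantic feature a, produces a hidden category semantic feature together with a latent noise feature. As a minimal sketch only, assuming PyTorch and illustrative dimensions (vis_dim, sem_dim, noise_dim, hid_dim are not taken from the patent), such an encoder might look like:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a visual feature x to a predicted category semantic feature a_hat
    plus a latent noise feature z, as described in step 1)."""
    def __init__(self, vis_dim=2048, sem_dim=312, noise_dim=312, hid_dim=1024):
        super().__init__()
        # "Three-layer network": input fully connected layer, hidden layer, output layer.
        self.net = nn.Sequential(
            nn.Linear(vis_dim, hid_dim),
            nn.ReLU(),
            nn.Linear(hid_dim, sem_dim + noise_dim),
        )
        self.sem_dim = sem_dim

    def forward(self, x):
        h = self.net(x)
        # Split the output into the semantic part (supervised by the true
        # category semantic feature a) and the latent noise part.
        a_hat, z = h[:, :self.sem_dim], h[:, self.sem_dim:]
        return a_hat, z

# The semantic part can be supervised with, e.g., an L2 loss against the
# ground-truth category semantic feature a (the exact loss term is an assumption):
# loss_sem = (a_hat - a).pow(2).sum(dim=1).mean()
```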



Abstract

The invention discloses a zero-shot learning method based on a self-encoding generative adversarial network. The method comprises the following steps: inputting the visual features of seen-category samples and the corresponding category semantic features; inputting specific numerical values for the balance parameters λ and α; setting the initial values and learning rates of the parameters and training the self-encoding generative adversarial network provided by the invention with an Adam optimizer, thereby obtaining the model parameters of the encoder and the decoder; inputting the semantic features of the unseen categories and synthesizing visual features of the corresponding categories with the trained model parameters; and classifying the test samples of the unseen categories. The invention can effectively align the semantic relationship between the visual modality and the category semantic modality. Visual information and category semantic information are fully fused, so the semantic association between the two modalities can be mined more effectively and more useful visual features can be synthesized.
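Read end to end, the abstract outlines a train → synthesize → classify workflow: train the network with Adam under balance parameters λ and α, synthesize visual features for unseen categories from their semantics with the trained decoder, and classify test samples against those synthesized features. The sketch below, again assuming PyTorch, illustrates that flow only; the specific loss terms that λ and α weight (reconstruction and semantic alignment here), the discriminator objective, and the nearest-prototype classifier are assumptions for illustration, not the patent's exact formulation.

```python
import torch

def train(encoder, decoder, discriminator, loader, lam=1.0, alpha=1.0,
          lr=1e-4, epochs=50):
    """Adversarial training of the autoencoder-style generator (encoder + decoder)."""
    opt_g = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=lr)
    for _ in range(epochs):
        for x, a in loader:  # visual features x, category semantic features a
            # Discriminator step: real visual features vs. decoded (synthesized) ones.
            _, z = encoder(x)
            x_syn = decoder(torch.cat([a, z], dim=1))
            d_real = torch.sigmoid(discriminator(x))
            d_fake = torch.sigmoid(discriminator(x_syn.detach()))
            d_loss = -(torch.log(d_real + 1e-8).mean() + torch.log(1 - d_fake + 1e-8).mean())
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # Generator step: adversarial term plus reconstruction and semantic
            # alignment terms, balanced by lam and alpha (assumed roles).
            a_hat, z = encoder(x)
            x_syn = decoder(torch.cat([a, z], dim=1))
            g_loss = (-torch.log(torch.sigmoid(discriminator(x_syn)) + 1e-8).mean()
                      + lam * (x_syn - x).pow(2).mean()
                      + alpha * (a_hat - a).pow(2).mean())
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return encoder, decoder

def synthesize_and_classify(decoder, unseen_sem, x_test, noise_dim):
    """Synthesize one visual prototype per unseen class from its semantics,
    then label each test sample by the nearest prototype."""
    with torch.no_grad():
        z = torch.randn(unseen_sem.size(0), noise_dim)
        prototypes = decoder(torch.cat([unseen_sem, z], dim=1))
        distances = torch.cdist(x_test, prototypes)  # Euclidean distances
        return distances.argmin(dim=1)               # predicted unseen-class indices
```

In practice one would typically synthesize many samples per unseen class and train a conventional classifier on them; a single prototype per class is used here only to keep the sketch short.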

Description

Technical field

[0001] The invention relates to a zero-shot learning method, and in particular to a zero-shot learning method based on an autoencoder generative adversarial network.

Background technique

[0002] Advances in deep learning have greatly advanced machine learning and computer vision. However, most of these techniques are limited to supervised learning, which requires a large number of labeled samples to train a model. In practice, sample labeling is extremely laborious, so the lack of labeled samples is one of the bottlenecks of current machine learning. A technology is needed that can still recognize target categories when no labeled visual data for them is available; zero-shot learning is exactly such a technology.

[0003] Zero-shot learning is a technique for identifying unseen categories (categories without training data) using data of seen categories, supplemented...


Application Information

IPC (8): G06K9/62, G06K9/66, G06N3/08
CPC: G06N3/08, G06V30/194, G06F18/2155, Y02T10/40
Inventors: 于云龙 (Yu Yunlong), 冀中 (Ji Zhong)
Owner: TIANJIN UNIV