
Cross-domain variational adversarial self-coding method

An autoencoder technique in the field of cross-domain variational adversarial autoencoding, which achieves good results

Active Publication Date: 2019-09-06
BEIFANG UNIV OF NATIONALITIES

AI Technical Summary

Problems solved by technology

In the fields of industrial design and virtual reality, designers often want to provide a single picture and generate a series of continuously transformed pictures in the target domain. Existing methods cannot meet this demand.

Method used




Embodiment Construction

[0038] The present invention will be further described below in conjunction with specific examples.

[0039] The cross-domain variational adversarial self-encoding method provided in this embodiment realizes one-to-many continuous transformation of cross-domain images without any paired data. As shown in figure 1, which presents the overall network framework, the encoder decomposes each sample into a content code and a style code: the content code is used for the adversarial operation, and the style code is used for the variational operation. The decoder concatenates the content code and the style code to generate an image. The method includes the following steps:
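To make the framework concrete, here is a minimal sketch of the encoder/decoder split described above. This is not the patent's actual network: the linear maps, layer sizes (`IMG_DIM`, `CONTENT_DIM`, `STYLE_DIM`), and function names are all illustrative assumptions; only the structure (decompose into two codes, decode from their concatenation) follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions; the patent does not specify these.
IMG_DIM, CONTENT_DIM, STYLE_DIM = 64, 8, 4

# Stand-in linear "encoder" and "decoder" weights.
W_enc = rng.standard_normal((CONTENT_DIM + STYLE_DIM, IMG_DIM)) * 0.1
W_dec = rng.standard_normal((IMG_DIM, CONTENT_DIM + STYLE_DIM)) * 0.1

def encode(x):
    """Decompose a sample into (content_code, style_code)."""
    h = W_enc @ x
    return h[:CONTENT_DIM], h[CONTENT_DIM:]

def decode(content, style):
    """Concatenate the two codes and map back to image space."""
    return W_dec @ np.concatenate([content, style])

x = rng.standard_normal(IMG_DIM)          # a flattened "image"
content, style = encode(x)
x_rec = decode(content, style)
print(content.shape, style.shape, x_rec.shape)  # (8,) (4,) (64,)
```

In the real method the encoder and decoder would be deep networks trained with the adversarial and variational losses described below; this sketch only fixes the data flow.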

[0040] 1) Use the encoder to decouple the content code and style code of the cross-domain data.

[0041] First, the encoder decomposes the image into a content code and a style code and obtains the corresponding posterior distributions. For the content code, an adversarial autoencoder (AAE) is introduced; for the style encod...
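The variational treatment of the style code can be sketched as follows: the encoder predicts a mean and log-variance, the style code is drawn with the reparameterization trick, and a KL divergence term pulls the posterior toward a standard normal prior. This is a standard VAE-style construction, shown here only to illustrate the "variation" operation the text names; the exact loss weighting in the patent is not specified here.

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    """Sample z ~ N(mu, exp(logvar)) differentiably via z = mu + sigma * eps."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over code dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

rng = np.random.default_rng(1)
mu, logvar = np.zeros(4), np.zeros(4)   # a posterior already matching the prior
z = reparameterize(mu, logvar, rng)
print(kl_to_standard_normal(mu, logvar))  # 0.0
```

A posterior that already matches N(0, I) incurs zero KL penalty; any shift in the mean or variance makes the term positive, which is what drives the style code toward the prior during training.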



Abstract

The invention discloses a cross-domain variational adversarial self-coding method. The method comprises the following steps: 1) decoupling the content code and style code of cross-domain data using an encoder; 2) fitting the content code and the style code of the image with an adversarial operation and a variational operation, respectively; and 3) realizing image reconstruction by concatenating the content and style codes, and obtaining one-to-many continuous transformation of cross-domain images by cross-concatenating the content and style codes of different domains. The method realizes one-to-many continuous transformation of cross-domain images without providing any paired data.
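Step 3's cross-concatenation can be sketched as follows: take a content code from one domain, pair it with style codes from the other domain, and interpolate between style codes to obtain the "continuous transformation" series. The decoder here is a stand-in linear map with assumed sizes, not the patent's network; only the swap-and-interpolate logic follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in decoder weights (assumed sizes: 8-dim content + 4-dim style -> 16-dim image).
W_dec = rng.standard_normal((16, 8 + 4)) * 0.1

def decode(content, style):
    """Generate an 'image' from concatenated content and style codes."""
    return W_dec @ np.concatenate([content, style])

content_A = rng.standard_normal(8)   # content code taken from domain A
style_B0 = rng.standard_normal(4)    # two style codes taken from domain B
style_B1 = rng.standard_normal(4)

# Fixing the content and sliding the style code between two domain-B styles
# yields a series of continuously transformed target-domain outputs.
series = [decode(content_A, (1 - t) * style_B0 + t * style_B1)
          for t in np.linspace(0.0, 1.0, 5)]
print(len(series), series[0].shape)  # 5 (16,)
```

The first element of the series is exactly the decode of (content_A, style_B0) and the last is the decode of (content_A, style_B1); the intermediate outputs give the one-to-many continuous transformation from a single input picture.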

Description

Technical field

[0001] The present invention relates to the technical field of computer vision, and in particular to a cross-domain variational adversarial autoencoding method.

Background technique

[0002] In the field of computer vision, image generation and image translation using single-domain data have achieved very good results. However, in real life and applications, data usually come from different domains. For example, an object can have two representations, sketch and view; the same text content can appear in different fonts; and so on. How to process cross-domain data is therefore an important research direction. Existing cross-domain work mainly builds on generative adversarial networks (GANs). This type of method achieves image generation by fitting the posterior distribution through adversarial learning on data from different domains. The learning process always requires paired data samples, which places relatively high requirements on...


Application Information

IPC(8): G06K9/62
CPC: G06F18/214
Inventor: 白静, 田栋文, 张霖, 杨宁
Owner: BEIFANG UNIV OF NATIONALITIES