
Text-to-image generation method based on a long-range dependence attention generative adversarial network

A technology for attention-based image generation, applied to biological neural network models, image data processing, 2D image generation, etc., addressing problems such as the inability to judge whether the generated output is real.

Status: Inactive · Publication Date: 2021-03-12
HUNAN UNIV

AI Technical Summary

Problems solved by technology

The two networks compete against each other and continually adjust their parameters; the ultimate goal is that the discriminator network can no longer judge whether the output of the generator network is real.
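A minimal sketch of this adversarial objective, assuming a PyTorch-style generator G and discriminator D conditioned on a text embedding (hypothetical modules; the patent does not fix a framework or a particular loss form):

    import torch
    import torch.nn.functional as F

    def discriminator_step(D, G, real_images, text_embedding, noise):
        """The discriminator learns to tell real images from generated ones."""
        fake_images = G(noise, text_embedding).detach()
        real_logits = D(real_images, text_embedding)
        fake_logits = D(fake_images, text_embedding)
        # Binary cross-entropy GAN loss, one common choice of objective.
        return (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
                + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))

    def generator_step(D, G, text_embedding, noise):
        """The generator tries to make the discriminator label its outputs as real."""
        fake_logits = D(G(noise, text_embedding), text_embedding)
        return F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))

Alternating these two steps realizes the competition described above: the discriminator improves at separating real from generated images, while the generator learns to produce outputs the discriminator accepts as real.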

Method used



Examples


Embodiment Construction

[0055] In order to make the above objects, features, and advantages of the present invention easier to understand, the technical solutions in the embodiments of the present invention are described below with reference to specific embodiments. The specific implementation process of the present invention is as follows:

[0056] Step 1: Divide the data set into a training set and a test set. Following previous text-to-image methods, our approach is evaluated on the Caltech-UCSD Birds 200 dataset (CUB) and the Microsoft Common Objects in Context dataset (COCO). The CUB dataset contains 11,788 bird images belonging to 200 categories, each image accompanied by 10 visual description sentences. For text-to-image synthesis, the COCO dataset is more diverse and challenging, with 80K training images and 40K test images; each image has 5 visual description sentences.
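As a rough illustration of this split, the sketch below pairs each image with its caption file and reads predefined train/test id lists; the directory layout, file names, and the load_split helper are hypothetical, since the text above only names the datasets:

    from pathlib import Path

    def load_split(root: str, split_file: str):
        """Return (image_path, caption_path) pairs for the ids listed in split_file."""
        root = Path(root)
        pairs = []
        for image_id in (root / split_file).read_text().split():
            image_path = root / "images" / f"{image_id}.jpg"
            caption_path = root / "text" / f"{image_id}.txt"  # 10 captions per CUB image, 5 per COCO image
            pairs.append((image_path, caption_path))
        return pairs

    train_pairs = load_split("CUB_200_2011", "train.txt")
    test_pairs = load_split("CUB_200_2011", "test.txt")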

[0057] Step 2: Data preprocessing. The preprocessing steps are: build a dictionary and add NULL to the dictionary; build a text vector, using a one-dimensional vector of length 18, the ...
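A minimal sketch of the preprocessing described so far, assuming simple whitespace tokenization (an assumption, since the paragraph is cut off): the dictionary reserves a NULL token, and each caption is padded or truncated to a one-dimensional index vector of length 18.

    MAX_LEN = 18  # length of the one-dimensional text vector stated above

    def build_dictionary(captions):
        word2idx = {"NULL": 0}  # NULL token added to the dictionary, used here for padding
        for caption in captions:
            for word in caption.lower().split():
                word2idx.setdefault(word, len(word2idx))
        return word2idx

    def encode_caption(caption, word2idx):
        idxs = [word2idx.get(w, word2idx["NULL"]) for w in caption.lower().split()]
        idxs = idxs[:MAX_LEN]  # truncate captions longer than 18 words
        idxs += [word2idx["NULL"]] * (MAX_LEN - len(idxs))  # pad shorter captions with NULL
        return idxs

    vocab = build_dictionary(["this bird has a red head and a short beak"])
    vector = encode_caption("this bird has a red head", vocab)  # length-18 index vector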



Abstract

The invention discloses a text-to-image generation method based on a long-range dependence attention generative adversarial network (LRDAGAN). Long-range dependence learning is brought into the generation process: more specifically, the feature map of the generator is divided into several parts, the semantic consistency of these parts is enhanced using word-level features, and a widely used previous method is further improved to better guide picture generation. As a result, the method can generate not only high-quality images but also images with better semantic consistency.
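The sketch below illustrates this idea under assumptions the abstract does not specify (tensor shapes, number of parts, and the exact attention form): the generator feature map is divided into parts, and each part attends to word-level features so that word context reinforces its semantic consistency.

    import torch
    import torch.nn.functional as F

    def part_word_attention(feature_map, word_features, num_parts=4):
        """
        feature_map:   (B, C, H, W) generator features
        word_features: (B, T, C) word-level text features projected to the same channel dimension
        Splits the feature map into parts, attends each part to the words, and injects
        the resulting word context back into the features.
        """
        B, C, H, W = feature_map.shape
        parts = feature_map.view(B, C, num_parts, (H * W) // num_parts)  # divide the map into parts
        attended = []
        for p in range(num_parts):
            region = parts[:, :, p, :].permute(0, 2, 1)                  # (B, N_p, C)
            scores = torch.bmm(region, word_features.transpose(1, 2))    # (B, N_p, T) region-word affinity
            attn = F.softmax(scores, dim=-1)
            attended.append(torch.bmm(attn, word_features).permute(0, 2, 1))  # word context per location
        context = torch.stack(attended, dim=2).view(B, C, H, W)
        return feature_map + context  # word-level context enhances semantic consistency of each part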

Description

Technical field:

[0001] The present invention relates to the field of text processing, and more particularly to a text-to-image generation method based on a long-range dependence attention generative adversarial network.

Background technique:

[0002] Generating, from a text description, a picture that corresponds to its semantics and matches the real situation involves multiple aspects: first, the text description must be processed using knowledge of natural language processing; next, generating the corresponding picture requires relevant techniques from computer vision. Most current existing technical methods realize text-to-image generation in two parts:

[0003] 1. Text encoding. The text encoding part uses two neural networks to handle text and pictures respectively, and the networks gradually learn to map both into the same vector space. For example, the AttnGAN network uses a CNN convolutional neural network, commonly used in image processing, to acquire image featur...
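A hedged sketch of the two-network encoding described in [0003], mapping captions and images into a shared vector space as AttnGAN does; the backbone choices (a GRU text encoder and a ResNet-18 image encoder) and the dimensions are illustrative assumptions, not details fixed by the text above.

    import torch.nn as nn
    import torchvision.models as models

    class TextEncoder(nn.Module):
        def __init__(self, vocab_size, embed_dim=300, hidden_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

        def forward(self, captions):                   # captions: (B, T) word indices
            words, _ = self.rnn(self.embed(captions))  # (B, T, 2*hidden) word-level features
            sentence = words.mean(dim=1)               # (B, 2*hidden) sentence-level feature
            return words, sentence

    class ImageEncoder(nn.Module):
        def __init__(self, out_dim=512):
            super().__init__()
            backbone = models.resnet18(weights=None)
            self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # keep convolutional trunk
            self.proj = nn.Linear(backbone.fc.in_features, out_dim)    # project into the shared space

        def forward(self, images):                     # images: (B, 3, H, W)
            feats = self.cnn(images).flatten(1)
            return self.proj(feats)                    # (B, out_dim) image feature in the shared space

Training the two encoders so that matching caption and image features lie close together in this shared space is what lets word-level features later guide image generation.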

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T11/00; G06N3/04
CPC: G06T11/001; G06N3/045
Inventor: 全哲, 陈杨阳, 王梓旭
Owner: HUNAN UNIV