
Multi-modal feature fusion text-guided image restoration method

A technology of feature fusion and text-guided image restoration, applied in neural learning methods, character and pattern recognition, biological neural network models, etc.

Active Publication Date: 2020-06-26
FUDAN UNIV
Cites: 7 · Cited by: 18

AI Technical Summary

Problems solved by technology

[0006] Although methods that generate images from text can produce reasonable results, the generation is random: the size, shape, and orientation of objects in the generated image are not fixed, so such methods are difficult to apply directly to image restoration.




Embodiment Construction

[0045] For an image with an object missing from its central area, mark the missing area as the region to be repaired; the network shown in Figure 1 can then be used to perform image inpainting.

[0046] The specific process is as follows.

[0047] (1) Mark the defect area in the image to be repaired

[0048] Consider an image with severe loss of object information, such as the bird image in Figure 1, whose central region is missing. First construct an all-zero matrix M of the same size as the input image X, then set the entries corresponding to the pixels of the region to be repaired to 1 (the gray area in the center of the defect image in Figure 1); all other entries remain 0.
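The mask construction in [0048] can be sketched as follows. This is a minimal illustration in pure Python, assuming a rectangular hole whose position and size are known; the function name and arguments are illustrative, not from the patent.

```python
def make_mask(height, width, box):
    """Build the binary matrix M: same size as the input image X,
    1 inside the region to be repaired, 0 everywhere else.

    box = (top, left, box_h, box_w) describes an assumed
    rectangular hole for illustration.
    """
    top, left, box_h, box_w = box
    M = [[0] * width for _ in range(height)]
    for i in range(top, top + box_h):
        for j in range(left, left + box_w):
            M[i][j] = 1
    return M

# A 6x6 image with a 2x2 central hole marked for repair:
mask = make_mask(6, 6, (2, 2, 2, 2))
```

In practice the mask would be built at the image's actual resolution and could describe an arbitrary (non-rectangular) missing region.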

[0049] (2) Extract text features from the text description T corresponding to the image

[0050] The text description T is fed into a pre-trained recurrent neural network to obtain preliminary sentence features and word-embedding features. The sentence features then pass through a conditi...
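The text-encoding step above can be illustrated with a toy stand-in for the pre-trained recurrent network. This sketch is an assumption made for illustration only: word vectors come from a fixed lookup table and the sentence feature is simply their element-wise mean, whereas the actual method uses a learned recurrent encoder.

```python
def encode_text(tokens, embed_table, dim=4):
    """Toy text encoder: returns per-word embedding features and a
    sentence feature (here the element-wise mean of the word vectors).
    The patent's pre-trained RNN would produce both kinds of features;
    this lookup-plus-mean scheme is only a stand-in."""
    word_feats = [embed_table.get(t, [0.0] * dim) for t in tokens]
    sent_feat = [sum(col) / len(word_feats) for col in zip(*word_feats)]
    return word_feats, sent_feat

# Hypothetical 4-dimensional word vectors for a two-word description:
embed_table = {"red": [1.0, 0.0, 0.0, 0.0], "bird": [0.0, 1.0, 0.0, 0.0]}
word_feats, sent_feat = encode_text(["red", "bird"], embed_table)
```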



Abstract

The invention belongs to the technical field of intelligent digital image processing, and particularly relates to a multi-modal feature fusion, text-guided image restoration method. The network takes a defect image and its corresponding text description as input and works in two stages: a coarse repair stage and a fine repair stage. In the coarse repair stage, the network maps the text features and image features into a unified feature space for fusion and uses the prior knowledge in the text features to generate a reasonable coarse result. In the fine repair stage, the network generates finer-grained textures for the coarse result. Reconstruction loss, adversarial loss, and text-guided attention loss are introduced into network training to help the network generate more detailed and natural results. Experimental results show that the method better predicts the semantic information of objects in the missing area, generates fine-grained textures, and effectively improves the image restoration result.
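The two-stage flow described in the abstract can be sketched as a thin pipeline. All network objects here are hypothetical callables standing in for the text encoder and the coarse and fine generators; their names and signatures are assumptions for illustration, not the patent's actual interfaces.

```python
def inpaint(defect_image, mask, text, text_encoder, coarse_net, fine_net):
    """Two-stage restoration as described in the abstract:
    1) encode the text into word and sentence features;
    2) the coarse stage fuses text and image features to fill the hole;
    3) the fine stage adds finer-grained texture to the coarse result.
    Returns both stage outputs."""
    word_feats, sent_feat = text_encoder(text)
    coarse = coarse_net(defect_image, mask, sent_feat, word_feats)
    fine = fine_net(coarse, mask, word_feats)
    return coarse, fine

# Stub networks that just tag their input, to show the data flow:
enc = lambda t: (["w"], "s")
coarse_net = lambda img, m, s, w: img + "+coarse"
fine_net = lambda c, m, w: c + "+fine"
coarse, fine = inpaint("x", None, "a bird", enc, coarse_net, fine_net)
```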

Description

Technical field

[0001] The invention belongs to the technical field of intelligent digital image processing, and in particular relates to an image restoration method, specifically a text-guided image restoration method with multi-modal feature fusion.

Background technique

[0002] Image inpainting is the task of synthesizing the missing or damaged parts of an image. Because of its many applications, such as reconstructing occluded regions and restoring damaged textures, it has become a hot research topic. The key to image inpainting is to preserve the global semantics of the image while recovering realistic detailed textures in the missing regions. Most traditional methods fill holes by searching for similar textures around the missing regions [1]. Lacking an understanding of high-level semantic information, these methods have difficulty reconstructing some special textures in images.

[0003] In recent years, image inpainting methods ba...
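The traditional strategy in [0002], filling holes from surrounding textures, can be caricatured by a nearest-known-pixel fill. This is a deliberate oversimplification written only to illustrate the idea; real exemplar-based methods match whole texture patches rather than single pixels.

```python
def fill_nearest(img, mask):
    """Toy stand-in for traditional hole filling: each missing pixel
    (mask == 1) copies the value of the closest known pixel, by
    Manhattan distance. Real exemplar-based inpainting compares
    whole patches around the hole instead of single pixels."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                _, value = min(
                    ((abs(i - a) + abs(j - b), img[a][b])
                     for a in range(h) for b in range(w)
                     if not mask[a][b]),
                    key=lambda t: t[0],
                )
                out[i][j] = value
    return out

# A 2x2 grayscale image whose bottom-right pixel is missing:
filled = fill_nearest([[1, 1], [5, 9]], [[0, 0], [0, 1]])
```

Because it copies raw values with no semantic understanding, this kind of method cannot invent structure (a bird's head, a wing) that is absent from the surrounding context, which is the gap the learning-based methods in [0003] aim to close.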


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06K9/46; G06F40/30; G06N3/04; G06N3/08
CPC: G06N3/08; G06V10/44; G06V10/56; G06N3/045; G06F18/241
Inventor: 颜波, 林青
Owner: FUDAN UNIV