
A complex visual image reconstruction method based on the dual model of deep codec

A dual-model visual image technology applied in the field of visual scene reconstruction. It addresses the problems of high noise, long reconstruction time, and low image accuracy, achieving the effects of low noise, short reconstruction time, and high accuracy.

Active Publication Date: 2021-04-30
UNIV OF ELECTRONICS SCI & TECH OF CHINA
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0003] In view of the deficiencies of the background art, the present invention addresses the problems of low accuracy of the reconstructed image, relatively large noise, and relatively long reconstruction time, and, building on previous research, designs a complex visual image reconstruction method based on a deep encoding-decoding dual model.

Method used


Image

  • A complex visual image reconstruction method based on the dual model of deep codec

Examples

Experimental program
Comparison scheme
Effect test

Embodiment Construction

[0042] A. Coding model:

[0043] Step A1: Perform a padding operation on the original natural stimulus image of size 256*256*3 to obtain data of size 262*262*3. Three stages are then applied in sequence to the zero-padded data, each consisting of three operations: Convolution, Batch Normalization, and ReLU (rectified linear unit nonlinearity). The convolution kernel sizes of the three stages are 7*7, 3*3, and 3*3; the strides are 1, 2, and 2; and the kernel depths are 64, 128, and 256, respectively. This step finally yields data of size 64*64*256.
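The shape arithmetic of Step A1 can be checked with a short sketch. This is my own illustrative code, not from the patent: kernel sizes, strides, and channel depths follow the text, while the per-stage padding values are my assumption, chosen so the shapes work out to the stated 64*64*256.

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a convolution (floor division, PyTorch convention)."""
    return (size + 2 * pad - kernel) // stride + 1

def encoder_step_a1_shapes(size=256):
    # Zero-pad 256 -> 262 (3 pixels per side); a 7x7 stride-1 valid conv then
    # brings it back to 256, and two 3x3 stride-2 convs (assumed pad 1) halve
    # the spatial size twice: 256 -> 128 -> 64.
    stages = [
        (7, 1, 0, 64),    # applied to the padded 262x262 input
        (3, 2, 1, 128),
        (3, 2, 1, 256),
    ]
    size = size + 6       # padding operation: 256 -> 262
    shapes = []
    for kernel, stride, pad, depth in stages:
        size = conv_out(size, kernel, stride, pad)
        shapes.append((size, size, depth))
    return shapes

print(encoder_step_a1_shapes())  # [(256, 256, 64), (128, 128, 128), (64, 64, 256)]
```

Under these padding assumptions the three stages reproduce the sizes stated in the text, ending at 64*64*256.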

[0044] Step A2: Use the 64*64*256 data obtained in step A1 as the input of this step and apply 9 residual operations to it. Each residual operation changes neither the spatial size (first two dimensions) nor the depth (third dimension) of the data, so the output of this step remains 64*64*256.
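The shape-preserving property of Step A2 can be sketched as follows. This is illustrative code, not the patent's exact layers: each residual operation computes y = x + F(x) with a shape-preserving branch F, so nine of them leave the 64*64*256 tensor unchanged in size. Here F is a placeholder elementwise transform standing in for the convolutional branch.

```python
import numpy as np

def residual_op(x):
    branch = np.tanh(x)   # placeholder for the Conv-BN-ReLU branch F(x)
    return x + branch     # identity shortcut: shapes must match exactly

x = np.zeros((64, 64, 256), dtype=np.float32)
for _ in range(9):        # nine residual operations, as in the text
    x = residual_op(x)
print(x.shape)            # (64, 64, 256)
```

Because the shortcut adds the input to the branch output elementwise, any branch that preserved shape would give the same result: the stack can be arbitrarily deep without altering the tensor dimensions.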


PUM

No PUM

Abstract

The invention discloses a complex visual image reconstruction method based on a deep codec dual model, belonging to the technical field of visual scene reconstruction in biomedical-image brain decoding. The invention first collects functional magnetic resonance signals while subjects view a large number of natural images. Four network models are then established: 1. an encoding model, which uses a convolutional neural network to encode natural images into voxel signals of the visual area; 2. a decoding model, which uses convolutional and deconvolutional neural networks to decode the voxel signals of the visual area into natural images; 3. a natural-image discrimination model, which distinguishes real images from fake images; 4. a visual-area-response discrimination model, which distinguishes real signals from false signals. By training these four models, the visual scene image can be restored from the brain signal. The invention realizes direct mutual conversion between natural scenes and brain signals for the first time, and can support practical brain-computer interface applications.
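The data flow among the four models can be sketched schematically. This is a toy stand-in of my own, not the patent's networks: each "model" is a random linear map, purely to show the image → voxels → image round trip and where the two discriminators attach; the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
IMG_DIM, VOXEL_DIM = 12, 5            # toy sizes (real: image pixels, fMRI voxels)

W_enc = rng.normal(size=(VOXEL_DIM, IMG_DIM))   # 1. encoding model: image -> voxel signals
W_dec = rng.normal(size=(IMG_DIM, VOXEL_DIM))   # 2. decoding model: voxel signals -> image
w_dimg = rng.normal(size=IMG_DIM)               # 3. image discriminator (real vs fake image)
w_dvox = rng.normal(size=VOXEL_DIM)             # 4. response discriminator (real vs false signal)

image = rng.normal(size=IMG_DIM)
voxels = W_enc @ image                # encode a natural image into a visual-area response
recon = W_dec @ voxels                # decode the response back into an image
img_score = float(w_dimg @ recon)    # discriminator outputs would drive adversarial training
vox_score = float(w_dvox @ voxels)

print(recon.shape, voxels.shape)      # (12,) (5,)
```

In the actual method the linear maps are replaced by the convolutional/deconvolutional networks described above, and the two discriminator scores supply adversarial losses for training the encoder-decoder pair.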

Description

Technical field
[0001] The method belongs to the technical field of visual scene reconstruction in brain decoding of biomedical images, and specifically relates to the framework construction of a natural image reconstruction model from functional magnetic resonance images.
Background technique
[0002] In 2008, Miyawaki et al. had subjects view a large number of flashing checkerboard stimulus pictures and recorded the BOLD responses to these stimuli in the early visual cortex (V1/V2/V3). Using multi-voxel pattern analysis (MVPA), they established a multi-scale sparse multinomial logistic regression (SMLR) local decoder model and realized, for the first time, visual image reconstruction from brain signals not limited to candidate visual stimulus categories, reconstructing simple geometric images and letter stimuli. This study provided a new way of interpreting the visual perception state of the brain. However, the method of Miyawaki et ...

Claims


Application Information

Patent Timeline
No application data
Patent Type & Authority: Patent (China)
IPC(8): G06T9/00
CPC: G06T9/001
Inventor 陈华富黄伟王冲颜红梅杨晓青杨天刘秩铭
Owner UNIV OF ELECTRONICS SCI & TECH OF CHINA