
Complex visual image reconstruction method based on depth encoding and decoding dual model

A dual-model visual image technology, applied in the field of visual scene reconstruction, that addresses problems such as long reconstruction time, high noise, and low image accuracy

Active Publication Date: 2018-09-25
UNIV OF ELECTRONICS SCI & TECH OF CHINA
5 Cites · 6 Cited by

AI Technical Summary

Problems solved by technology

[0003] Addressing the deficiencies of the background art, the present invention solves the problems of low reconstructed-image accuracy, relatively high noise, and relatively long reconstruction time, and, building on previous research, improves upon and designs a complex visual image reconstruction method based on a deep encoding-decoding dual model.

Method used



Examples


Embodiment Construction

[0042] A. Coding model:

[0043] Step A1: Perform a padding operation on the original natural stimulus image of size 256*256*3 to obtain data of size 262*262*3 (3 zero-filled pixels on each side). Then apply three stages in sequence to the zero-padded data, each stage consisting of three operations: Convolution, Batch Normalization, and ReLU (rectified linear unit nonlinear activation). The convolution kernel sizes of the three stages are 7*7, 3*3, and 3*3; the convolution strides are 1, 2, and 2; and the convolution kernel depths are 64, 128, and 256, respectively. This step finally yields data of size 64*64*256.
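The size arithmetic of step A1 can be checked with the standard convolution output formula. This is a minimal sketch, not the invention's implementation; it assumes a padding of 1 on the two 3*3 stride-2 convolutions, which is what reproduces the stated 262 → 256 → 128 → 64 progression:

```python
def conv_out(size, kernel, stride, pad=0):
    """Spatial output size of a convolution: floor((size + 2*pad - kernel)/stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

side = 256 + 2 * 3                  # zero-pad 256 -> 262 (3 pixels per side)
side = conv_out(side, 7, 1)         # 7x7 conv, stride 1: 262 -> 256
side = conv_out(side, 3, 2, pad=1)  # 3x3 conv, stride 2: 256 -> 128 (pad=1 assumed)
side = conv_out(side, 3, 2, pad=1)  # 3x3 conv, stride 2: 128 -> 64  (pad=1 assumed)
print(side)  # 64
```

With kernel depths 64, 128, and 256 attached to these three stages, the output is the stated 64*64*256 volume.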

[0044] Step A2: Use the 64*64*256 data finally obtained in step A1 as the input of this step, and perform 9 Residual operations on it. Each residual operation changes neither the size (first two dimensions) nor the thickness (third dimension) of the data, so the data size at the end of this step is still 64*64*256.
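The shape-preserving property of step A2's residual operations can be sketched as follows. The inner transform here is an arbitrary stand-in for the block's learned layers (an assumption for illustration; the patent does not specify the residual block's internals in this excerpt):

```python
import numpy as np

def residual_block(x, transform):
    """Shape-preserving residual operation: y = x + F(x)."""
    out = x + transform(x)
    assert out.shape == x.shape  # neither size nor thickness changes
    return out

x = np.zeros((64, 64, 256))      # output volume of step A1
for _ in range(9):               # 9 stacked residual operations, as in step A2
    x = residual_block(x, lambda t: 0.1 * np.tanh(t))
print(x.shape)  # (64, 64, 256)
```

Because each block only adds a same-shaped correction to its input, stacking 9 of them leaves the 64*64*256 volume unchanged in dimensions.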



Abstract

The invention discloses a complex visual image reconstruction method based on a deep encoding-decoding dual model, belonging to the field of visual scene reconstruction in brain decoding of biomedical images. The method first collects functional magnetic resonance signals while subjects view a large number of natural images, and then establishes four network models: (1) a coding model, in which a convolutional neural network encodes the natural images into visual-area voxel signals; (2) a decoding model, in which a convolutional neural network and a deconvolutional neural network decode the visual-area voxel signals into natural images; (3) a natural-image discrimination model, which distinguishes true images from false images; and (4) a visual-area response discrimination model, which distinguishes true signals from false signals. By training the four designed models, visual scene images can be recovered from brain signals. The invention solves, for the first time, the problem of direct conversion between a natural scene and the brain signal, enabling practical application of brain-computer interface scenarios.
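The roles of the four cooperating models can be sketched schematically. The linear maps, dimensions, and mean-based discriminators below are illustrative stand-ins chosen for this sketch, not the deep networks the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM, VOX_DIM = 100, 20  # illustrative sizes (assumption)
W_enc = 0.1 * rng.standard_normal((IMG_DIM, VOX_DIM))
W_dec = 0.1 * rng.standard_normal((VOX_DIM, IMG_DIM))

def encode(img):      # model 1 (coding): image -> visual-area voxel signal
    return img @ W_enc

def decode(vox):      # model 2 (decoding): voxel signal -> image
    return vox @ W_dec

def disc_image(img):  # model 3: probability the image is a true image
    return 1.0 / (1.0 + np.exp(-img.mean()))

def disc_voxel(vox):  # model 4: probability the voxel signal is a true signal
    return 1.0 / (1.0 + np.exp(-vox.mean()))

img_real = rng.standard_normal(IMG_DIM)
vox_fake = encode(img_real)   # image -> brain-signal direction
img_recon = decode(vox_fake)  # brain-signal -> image direction
p_img, p_vox = disc_image(img_recon), disc_voxel(vox_fake)
print(img_recon.shape)  # (100,)
```

In training, the two discriminators would push the encoder's signals and the decoder's images toward the real distributions, which is how the dual model supports conversion in both directions.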

Description

Technical field

[0001] The method belongs to the technical field of visual scene reconstruction in brain decoding of biomedical images, and specifically relates to the framework construction of a natural image reconstruction model for functional magnetic resonance images.

Background technique

[0002] In 2008, Miyawaki et al. asked subjects to view a large number of flashing checkerboard stimulus pictures and recorded the BOLD signal responses to these stimuli in the early visual cortex (V1/V2/V3). Using the multi-voxel pattern analysis (MVPA) method, they established a multi-scale sparse multinomial logistic regression (SMLR) local decoder model and realized, for the first time, brain-signal visual image reconstruction not limited to candidate visual stimulus categories, reconstructing simple geometric images and letter stimuli. This study provided a new way of interpreting the visual perception state of the brain. However, the method of Miyawaki et ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T9/00
CPC: G06T9/001
Inventor: 陈华富黄伟王冲颜红梅杨晓青杨天刘秩铭
Owner UNIV OF ELECTRONICS SCI & TECH OF CHINA