
Multi-modal medical image fusion method based on double-residual ultra-dense network

A medical-image and dense-network technology, applied in the field of medical image fusion, which addresses the loss of useful information and the unclear details of fused images, achieving reduced loss, target integrity, and sufficient feature extraction.

Active Publication Date: 2020-11-03
ZHONGBEI UNIV
AI Technical Summary

Problems solved by technology

Using residual learning or dense connections for image fusion improves the spatial resolution of the fused image. However, DenseNet and ResNet use only the output of the last network layer for feature fusion, which causes the loss of some useful information extracted by the intermediate layers, leaving details in the fused image unclear.


Examples


Embodiment Construction

[0056] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention, without creative effort, fall within the protection scope of the present invention.

[0057] The present invention proposes a multimodal medical image fusion method based on Dual Residual Hyper-Densely Networks (DRHDNs). DRHDNs includes two parts: feature extraction and feature fusion.
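To make the cross-path "hyper-dense" wiring concrete, the following is a minimal, dependency-free sketch (the author's illustration, not code from the patent): two parallel feature-extraction paths, one per modality, where each layer receives the outputs of all earlier layers from both paths. Feature maps are modeled as string tags; a real DRHDN would implement each layer as a convolution.

```python
# Sketch of the hyper-dense connectivity pattern described for DRHDNs:
# two parallel paths (one per imaging modality), where every layer
# receives the outputs of ALL earlier layers from BOTH paths.
# Feature maps are abstracted as string tags (e.g. "A2" = output of
# layer 2 on path A); a real network would use conv layers instead.

def hyper_dense_inputs(num_layers):
    """For each layer in each path, list the feature-map tags it
    receives under cross-path (hyper-dense) connectivity."""
    inputs = {}
    for path in ("A", "B"):
        for l in range(num_layers):
            # Layer l sees every earlier layer's output from both paths;
            # the first layer sees only its own path's input image.
            feeds = [f"{p}{k}" for k in range(l) for p in ("A", "B")]
            inputs[f"{path}{l}"] = feeds or [f"x_{path}"]
    return inputs

conn = hyper_dense_inputs(4)
print(conn["A3"])  # ['A0', 'B0', 'A1', 'B1', 'A2', 'B2']
```

This contrasts with an ordinary dense block, where layer A3 would see only A0, A1, A2; the cross-path feeds (B0, B1, B2) are what carry information between the two modalities during feature extraction.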

[0058] Given that the double residual dense network does not fully use the useful information extracted by the intermediate layers, and that hyper-dense connections have achieved good results in medical image segmentation, the present invention applies the ...



Abstract

The invention discloses a multi-modal medical image fusion method based on a double-residual ultra-dense network. The method comprises the steps of: extracting the shallow features of a first-modality medical image and a second-modality medical image through convolution in the first Conv layer and activation in the PReLU layer of the double-residual ultra-dense network; extracting deep features through residual learning and ultra-dense connections; splicing the deep features along the channel dimension in the Concat layer of the network; and finally performing Conv-layer convolution and PReLU-layer activation to obtain a fused image of the two modalities. The double-residual ultra-dense block, obtained by combining the residual dense block with ultra-dense connections, applies dense connections not only between layers of the same path but also between layers of different paths, transmitting information between the two paths that extract the features of the different modalities. The extracted deep features are therefore more detailed and richer, and the loss of useful information from the intermediate layers of the network is reduced.
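The pipeline in the abstract (per-modality conv + PReLU, channel-dimension Concat, then a final conv + PReLU) can be sketched as follows. This is a toy, dependency-free illustration by the editor, not the patent's implementation: the learned Conv layers are stubbed as identity, and feature maps are nested lists of shape channels x 2 x 2.

```python
# Toy sketch of the fusion pipeline from the abstract: shallow features
# via conv + PReLU for each modality, channel-wise splicing (Concat),
# then a final conv + PReLU. "conv" is stubbed as identity to keep the
# example dependency-free; a real model learns convolution kernels.

def prelu(x, a=0.25):
    """Parametric ReLU applied elementwise to a nested list."""
    if isinstance(x, list):
        return [prelu(v, a) for v in x]
    return x if x > 0 else a * x

def concat_channels(*feature_maps):
    """Concat-layer channel-dimension splicing: stack channel lists."""
    out = []
    for fm in feature_maps:
        out.extend(fm)
    return out

conv = lambda fm: fm  # identity stub standing in for a learned Conv layer

mri = [[[1.0, -2.0], [3.0, -4.0]]]   # modality 1: 1 channel, 2x2
ct  = [[[-1.0, 2.0], [-3.0, 4.0]]]   # modality 2: 1 channel, 2x2

shallow_mri = prelu(conv(mri))       # shallow features, modality 1
shallow_ct  = prelu(conv(ct))        # shallow features, modality 2
fused = prelu(conv(concat_channels(shallow_mri, shallow_ct)))
print(len(fused))      # 2 channels after Concat
print(fused[0][0])     # [1.0, -0.125]
```

Note that PReLU keeps (scaled) negative responses rather than zeroing them as ReLU would, which is consistent with the goal of reducing information loss.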

Description

technical field

[0001] The present invention relates to the technical field of medical image fusion, and in particular to a multimodal medical image fusion method based on a double-residual super-dense network.

Background technique

[0002] Image fusion is widely used in medical imaging, remote sensing, machine vision, biometrics and military applications. The purpose of fusion is to achieve better contrast and perceptual experience. In recent years, with the increasing demands of clinical applications, research on multimodal medical image fusion has attracted much attention. The purpose of multimodal medical image fusion is to provide a better medical image to help doctors perform surgical intervention.

[0003] At present, medical images come in many modalities, such as magnetic resonance (MR) images, computed tomography (CT) images, positron emission tomography (PET) images and X-ray images, and images of different modalities have their own advantages and limitati...


Application Information

IPC(8): G06T5/50; G06K9/46; G06N3/04; G06N3/08; G06T7/00
CPC: G06T5/50; G06T7/0012; G06N3/08; G06T2207/20221; G06T2207/10081; G06T2207/10088; G06V10/40; G06N3/045
Inventor: 王丽芳 (Wang Lifang), 王蕊芳 (Wang Ruifang), 张晋 (Zhang Jin)
Owner ZHONGBEI UNIV