
Multimodal medical image fusion method based on global information fusion

A deep-learning-based medical image fusion technology that addresses the limited fusion performance of traditional methods, the time-consuming dictionary learning of sparse-representation approaches, and the insufficient use of multi-modal image information, thereby enhancing the fusion effect and reducing mosaic artifacts.

Pending Publication Date: 2022-05-31
UNIV OF SCI & TECH OF CHINA
0 Cites · 5 Cited by

AI Technical Summary

Problems solved by technology

Traditional medical image fusion methods have achieved good results, but several deficiencies limit further improvement of fusion performance.
First, their performance relies heavily on hand-crafted features, which limits the generalization of these methods to other fusion tasks.
Second, different features may require different fusion strategies to be effective.
Third, for fusion methods based on sparse representation, dictionary learning is time-consuming, so synthesizing a fused image takes longer.
Moreover, these fusion strategies cannot effectively extract the global semantic information of multimodal images.
In addition, current deep-learning-based medical image fusion methods use multi-modal image information insufficiently and imprecisely.

Method used



Examples


Embodiment Construction

[0054] In this embodiment, a multimodal image fusion method based on global information fusion, as shown in Figure 1, includes the following steps:

[0055] Step 1. Obtain M original medical images of different modalities and preprocess them with color space conversion and image cropping to obtain the preprocessed image patch sets of all modalities {S1, S2, ..., SM}, where Sm denotes the set of image patches of the m-th modality, m ∈ {1, 2, ..., M}.

[0056] Step 1.1. Obtain the original medical images of multiple modalities required for the experiment from the Harvard Medical Image Dataset website (http://www.med.harvard.edu/AANLIB/home.html). This embodiment uses this public dataset to collect medical images of M = 2 modalities, including 279 pairs of MR-T1 and PET images and 318 pairs of MR-T2 and SPECT images, where MR-T1 and MR-T2 are grayscale anatomical images with one channel, while PET and SPECT are functional images in the RGB color space, and the ...
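The excerpt does not specify how the color space conversion and cropping in Step 1 are performed. A minimal sketch, assuming the functional images (PET/SPECT) are converted to YCbCr (a common choice in MR–PET fusion, so that only the luminance channel is fused with the grayscale MR image) and that images are cropped into overlapping square patches; the function names `rgb_to_ycbcr` and `crop_patches` are illustrative, not from the patent:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB image (H, W, 3) with values in [0, 1] to YCbCr (ITU-R BT.601)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return np.stack([y, cb, cr], axis=-1)

def crop_patches(img, patch_size, stride):
    """Crop a 2-D image into overlapping square patches of shape (patch_size, patch_size)."""
    h, w = img.shape[:2]
    patches = []
    for i in range(0, h - patch_size + 1, stride):
        for j in range(0, w - patch_size + 1, stride):
            patches.append(img[i:i + patch_size, j:j + patch_size])
    return np.stack(patches)
```

For example, a 256×256 luminance channel cropped with `patch_size=64, stride=32` yields a 7×7 grid of 49 patches per image, which would form one modality's patch set Sm.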



Abstract

The invention discloses a multi-modal medical image fusion method based on global information fusion. The method comprises the following steps: 1, preprocessing the original multi-modal medical images with color space conversion and image cropping; 2, establishing modal branch networks that interact through a fusion module at multiple scales, the fusion module being built from Transformer blocks to merge multi-modal feature information; 3, establishing a reconstruction module that synthesizes a fused image from the multi-scale multi-modal features; 4, training and evaluating the model on a public dataset; and 5, performing the medical image fusion task with the trained model. Through the Transformer fusion module and the interacting modal branch networks, the method can fully fuse multi-modal semantic information, achieves a fine-grained fusion effect, preserves the structure and texture information of the original images, and mitigates the mosaic artifacts caused by low-resolution medical images.
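The abstract does not disclose the internals of the Transformer fusion module. A minimal sketch of the general idea, assuming single-head scaled dot-product cross-attention in which one modality's feature tokens attend to the other's; all names and dimensions are illustrative, and learned Q/K/V projections are replaced by the identity for brevity:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(feat_a, feat_b):
    """Fuse two modality feature maps of shape (N tokens, d channels)
    by letting modality A's tokens attend to modality B's tokens."""
    d = feat_a.shape[-1]
    # A real Transformer applies learned linear projections here; identity for the sketch.
    q, k, v = feat_a, feat_b, feat_b
    attn = softmax(q @ k.T / np.sqrt(d))  # (N, N): each A token's weights over B tokens
    return feat_a + attn @ v              # residual connection back to modality A
```

In a multi-scale design such as the one summarized above, a module like this would be applied at each scale of the modal branch networks, in both directions, before the reconstruction module synthesizes the fused image.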

Description

Technical Field

[0001] The invention relates to the technical field of image fusion, in particular to a medical image fusion technology based on deep learning.

Background Technique

[0002] Medical images can help doctors better understand the structure and organization of the human body, and are widely used in clinical applications such as disease diagnosis, treatment planning, and surgical guidance. Because of their different imaging mechanisms, medical images of different modalities attend to different aspects of human organs and tissues, and a single-modality medical image often cannot provide comprehensive and sufficient information. Doctors therefore often need to observe multiple images at the same time to make an accurate judgment of the condition, which inevitably complicates diagnosis. Given the limitations of single-modal medical images, multi-modal medical image fusion is a very necessary research field. Multimodal medical image fusion refers to the sy...

Claims


Application Information

Patent Timeline
No application timeline available
Patent Type & Authority: Application (China)
IPC (8): G06V10/80; G06V10/82; G06V10/774; G06N3/04; G06N3/08; G06K9/62
CPC: G06N3/088; G06N3/045; G06F18/2155; G06F18/253; Y02D10/00
Inventor: 陈勋, 张静, 刘爱萍, 张勇东, 吴枫
Owner: UNIV OF SCI & TECH OF CHINA