
Medical image deep learning method with interpretability

A medical-image deep learning technology, applied in the field of medical image deep learning, which can solve problems such as blurred boundary pixels, lesion areas that are easily confused with surrounding normal physiological features, and missed or misdiagnosed lesion areas.

Active Publication Date: 2021-05-28
SHENZHEN GRADUATE SCHOOL TSINGHUA UNIV

AI Technical Summary

Problems solved by technology

At the same time, owing to patient-privacy protection and the extremely small proportion of images suitable for glandular cancer detection and segmentation, very few images are available for training and testing. Lesion presentation also differs between patients: boundary pixels are blurred and easily confused with the surrounding normal physiological features, so lesion areas may be missed or misdiagnosed.

Method used


Examples


Example

[0093] The specific operation of the present invention is further described below by way of example. The example uses datasets from the 2018 and 2019 ISIC Glandular Cancer Detection Challenges.

[0094] The 2018 dataset contains 2594 training images and 1000 test images, with ground-truth annotations; the 2019 dataset contains 2531 training images and 823 test images, also with ground-truth annotations. In the training phase, the learning rate is set to 0.005, the batch size (number of samples per training step) to 2000, and the number of iterations to 100.
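For reference, the training hyperparameters stated in [0094] can be collected in a simple configuration object. This is a minimal illustrative sketch; the class and field names are assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    """Training hyperparameters as stated in paragraph [0094]."""
    learning_rate: float = 0.005  # learning rate used in the training phase
    batch_size: int = 2000        # number of samples per training step
    iterations: int = 100         # number of training iterations

cfg = TrainConfig()
```

Grouping the values this way simply makes the experimental setup explicit; it carries no behavior of its own.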

[0095] Precision and recall are defined as the evaluation indicators of the model, namely

[0096] Precision = TP / (TP + FP)

[0097] Recall = TP / (TP + FN)

[0098] In the formula, TP represents the number of True Positives, FP represents the number of False Positives, and FN represents the number of False Negatives.
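Under these definitions, pixel-level precision and recall can be computed directly from binary prediction and ground-truth masks. The sketch below is illustrative; the function name and the flat 0/1-list mask format are assumptions, not part of the patent.

```python
def precision_recall(pred, truth):
    """Precision and recall for binary masks given as flat lists of 0/1 labels."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)  # true positives
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)  # false positives
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Example: a 4-pixel prediction with one false positive
p, r = precision_recall([1, 1, 0, 1], [1, 0, 0, 1])
```

Here TP = 2, FP = 1, FN = 0, so precision is 2/3 and recall is 1.0.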



Abstract

A medical image deep learning method with interpretability comprises the following steps: 1) acquiring an original medical image, performing image enhancement to increase the number of images, and feeding the enhanced images into a network as input images; 2) extracting image features from the input image with a VGG-16 network to obtain a medical prediction map conforming to intermediate clinical features; 3) comparing the obtained prediction map with the standard intermediate clinical feature, calculating the cross-entropy loss Loss(x, y) between the two, and adjusting the VGG-16 network parameters until the calculated cross-entropy loss falls below a set threshold, thereby completing network learning so that the prediction map produced by the VGG-16 network meets the comparison requirement. The network obtained by this method has better detection precision and better interpretability, and subsequent network optimization can conveniently be carried out as required.
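The loss in step 3 compares the prediction map against the standard intermediate clinical feature. A minimal sketch of a per-pixel binary cross-entropy follows; the function name, the flat-list format, and the binary formulation are illustrative assumptions, since the abstract does not specify the exact form of Loss(x, y).

```python
import math

def binary_cross_entropy(pred, target, eps=1e-12):
    """Mean binary cross-entropy between predicted probabilities and 0/1 targets."""
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(pred)

# Confident, correct predictions give a small loss
loss = binary_cross_entropy([0.9, 0.1], [1, 0])
```

In training, this scalar would be compared against the set threshold to decide when to stop adjusting the VGG-16 parameters.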

Description

Technical field

[0001] The invention relates to the field of computer images, and in particular to an interpretable deep learning method for medical images.

Background technique

[0002] In the diagnosis of glandular cancer, it is very important to automatically identify pathological regions and segment them accurately. In traditional medicine, this process is generally done manually. However, because of the large number of sliced images, the wide coverage of glands, and the varied manifestations arising from patients' different physiological characteristics, inexperienced doctors often miss or misdiagnose lesions. It is therefore important to perform accurate glandular cancer detection and lesion segmentation in advance. A deep learning neural network for accurate segmentation in medical gland-image cancer detection can largely determine the current stage of the disease and help doctors make a reasonable diagnosis and treatment plan. [00...

Claims


Application Information

Patent Timeline
no application
Patent Type & Authority: Application (China)
IPC(8): G06N3/08, G06N3/04, G06T7/00, G16H50/20
CPC: G06N3/08, G16H50/20, G06T7/0012, G06T2207/20081, G06T2207/20104, G06T2207/20084, G06T2207/30096, G06N3/045
Inventor: 王好谦, 孙中治, 杨芳
Owner SHENZHEN GRADUATE SCHOOL TSINGHUA UNIV