
Medical image segmentation method based on deep learning

A medical image segmentation technology based on deep learning, applied in image analysis, image enhancement, image data processing, etc. It addresses problems such as the inability to extract finer semantic features, susceptibility to noise interference, and a single convolution kernel scale, achieving coherent segmented images, strong resistance to noise interference, and strong generalization ability.

Active Publication Date: 2020-12-29
QINGDAO UNIV

AI Technical Summary

Problems solved by technology

However, continuous pooling operations may lose some spatial information, and a single convolution kernel scale in the convolution layers cannot extract finer semantic features. As a result, U-Net is easily disturbed by noise in some practical medical image segmentation scenarios and may overlook details.

For example, CN201910158251.5 discloses a brain tumor medical image segmentation method based on deep learning, comprising training a segmentation model, receiving the brain tumor medical image data to be segmented, performing four processing steps on the received data, and outputting the segmentation results.

CN201810852143.3 discloses an image segmentation method based on deep learning, comprising step a: normalizing the original image; step b: inputting the normalized image into a ResUNet network model, which extracts a feature map containing global semantic information from the input image and performs upsampling and feature-map stacking to obtain the final feature map; and step c: classifying the upsampled and stacked feature map pixel by pixel and outputting the image segmentation result.

CN201910521449.5 discloses a lung tissue image segmentation method based on deep learning that segments lung tissue in X-ray chest images with an improved Deeplabv3+ model: an X-ray chest image is input into a segmentation model trained with multiple sets of training data, each set comprising an X-ray chest image and a corresponding gold standard identifying the lung tissue, and the output information of the model includes the lung tissue segmentation result for that image.

CN201911355349.6 discloses a liver CT image segmentation method and device based on a deep learning neural network, the method comprising: constructing a U-shaped DenseNet two-dimensional (2D) network and a U-shaped DenseNet three-dimensional (3D) network from the dense convolutional network DenseNet and the U-shaped network U-Net; integrating the 3D network into the 2D network with the auto-context method to obtain a U-shaped DenseNet hybrid network for deep learning training; and using the trained U-shaped DenseNet hybrid network for liver CT image segmentation.



Examples


Embodiment 1

[0023] The specific process of realizing medical image segmentation in this embodiment is as follows:

[0024] (1) Acquire medical images, at least 15 in number. Each medical image is paired with a segmentation mask that serves as the label image for model training. The original medical images and label images are preprocessed and their resolution adjusted so that each image has a width of 256 and a height of 192;
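A minimal preprocessing sketch for this step is given below, assuming the images and masks are stored as individual files readable by Pillow; the 256x192 target size follows the text, while the grayscale conversion and [0, 1] intensity scaling are assumptions.

```python
# Sketch only: resize one image/mask pair to 256x192 as described in step (1).
import numpy as np
from PIL import Image

def load_pair(image_path, mask_path, width=256, height=192):
    """Load a medical image and its segmentation mask, resize both to
    width x height, and scale the image intensities to [0, 1]."""
    image = Image.open(image_path).convert("L").resize((width, height), Image.BILINEAR)
    # Nearest-neighbour resampling keeps the mask labels intact.
    mask = Image.open(mask_path).convert("L").resize((width, height), Image.NEAREST)
    return np.asarray(image, dtype=np.float32) / 255.0, np.asarray(mask, dtype=np.uint8)
```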

[0025] (2) Build a multi-scale semantic convolution module, MS Block. The MS Block contains four branches: the first branch is a single 3x3 convolution; the second branch is two consecutive 3x3 convolutions, which replace a 5x5 convolution while achieving the same receptive field; and the third branch is three consecutive 3x3 convolutions, matching the receptive field of a 7x7 convolution kernel. The first, second, and third branches each have a residual edge with a 1x1 convolution, used to make up for part ...
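A minimal sketch of the MS Block described above, written with Keras (the framework named in Embodiment 2). The stacked 3x3 branches and the 1x1 residual edges follow the text; the handling of the truncated fourth branch (taken here as a plain 1x1 convolution) and the fusion of the branches by concatenation are assumptions.

```python
# Sketch only: MS Block with three multi-scale branches plus residual edges.
from tensorflow.keras import layers

def conv_bn_relu(x, filters, kernel_size):
    """Convolution followed by batch normalization and ReLU."""
    x = layers.Conv2D(filters, kernel_size, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def ms_block(x, filters):
    # Branch 1: one 3x3 convolution (3x3 receptive field).
    b1 = conv_bn_relu(x, filters, 3)
    # Branch 2: two stacked 3x3 convolutions (same receptive field as 5x5).
    b2 = conv_bn_relu(conv_bn_relu(x, filters, 3), filters, 3)
    # Branch 3: three stacked 3x3 convolutions (same receptive field as 7x7).
    b3 = conv_bn_relu(conv_bn_relu(conv_bn_relu(x, filters, 3), filters, 3), filters, 3)
    # Each of the three branches gets its own residual edge: a 1x1 convolution
    # of the block input added to the branch output.
    b1 = layers.add([b1, layers.Conv2D(filters, 1, padding="same")(x)])
    b2 = layers.add([b2, layers.Conv2D(filters, 1, padding="same")(x)])
    b3 = layers.add([b3, layers.Conv2D(filters, 1, padding="same")(x)])
    # Fourth branch (assumed, as the text is truncated): a plain 1x1 convolution.
    b4 = layers.Conv2D(filters, 1, padding="same")(x)
    # Fuse the multi-scale features (concatenation assumed).
    return layers.Concatenate()([b1, b2, b3, b4])
```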

Embodiment 2

[0031] This embodiment adopts the technical solution of Embodiment 1 and uses Keras as the deep learning framework. The experimental environment is Ubuntu 18.04 with an NVIDIA RTX 2080Ti (12 GB, 1.545 GHz) GPU. The network has 9 layers. In the first network layer, between MS Block1 and MS Block9, t=4; that is, a 1x1 convolution expands the number of channels by a factor of 4. Since the semantic gap between the encoder and the decoder is largest in the first layer of the network, the most nonlinear transformations should be added there. By analogy, t is set to 3, 2, and 1 in turn from the second to the fourth layer of the network. Taking the first layer of the network structure as an example, the feature map output by MS Block1 passes through the RB Attention structure and is then concatenated directly with the upsampled feature map from MS Block8; the result is fed into MS Block9. This embodiment is consistent with the number of channels of each layer in the exi...
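A minimal Keras sketch of the first-layer skip connection described above. The 1x1 channel expansion by the factor t and the concatenation with the upsampled MS Block8 features follow the text; the internal attention mechanism of RB Attention (modelled here as squeeze-and-excitation style channel attention) and the reduction ratio are assumptions.

```python
# Sketch only: residual bottleneck attention (RB Attention) on the encoder
# features, followed by concatenation with the upsampled decoder features.
from tensorflow.keras import layers

def rb_attention(x, channels, t):
    """Residual bottleneck: expand channels by t with a 1x1 convolution, apply
    (assumed) channel attention, project back to `channels`, add the input.
    `channels` is expected to match the channel count of x."""
    y = layers.Conv2D(channels * t, 1, padding="same", activation="relu")(x)
    # Assumed channel attention (squeeze-and-excitation style).
    w = layers.GlobalAveragePooling2D()(y)
    w = layers.Dense(channels * t // 4, activation="relu")(w)
    w = layers.Dense(channels * t, activation="sigmoid")(w)
    y = layers.Multiply()([y, layers.Reshape((1, 1, channels * t))(w)])
    y = layers.Conv2D(channels, 1, padding="same")(y)
    return layers.add([x, y])

def first_layer_skip(ms_block1_out, ms_block8_out, channels, t=4):
    """First-layer skip connection: RB Attention on the MS Block1 output,
    2x upsampling of the MS Block8 output, then concatenation; the result
    would be fed into MS Block9."""
    enc = rb_attention(ms_block1_out, channels, t)
    dec = layers.UpSampling2D()(ms_block8_out)
    return layers.Concatenate()([enc, dec])
```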



Abstract

The invention belongs to the technical field of image segmentation and relates to a medical image segmentation method based on deep learning, which comprises the following steps: firstly, a novel convolution module is used in the encoder and decoder stages; secondly, a residual bottleneck structure containing an attention mechanism is designed and used in the skip-layer connections, thereby reducing the semantic difference between the encoder and the decoder and enabling the neural network to pay more attention to the target area to be segmented during training, so that finer semantic features can be extracted. The method is simple, blurred boundaries are recognized better, the segmented image is more coherent, and the method has strong resistance to noise interference and strong generalization ability.

Description

Technical field:
[0001] The invention belongs to the technical field of image segmentation and relates to a medical image segmentation method based on deep learning, i.e., a method for segmenting medical images using deep learning technology.
Background technique:
[0002] Early image segmentation was mostly based on graph theory or pixel clustering, which produced many classic algorithms such as K-Means. For medical images, segmentation was often based on edge detection and template matching, for example using the Hough transform for optic disc segmentation. However, medical images are derived from different imaging techniques, such as computed tomography (CT), X-ray, and magnetic resonance imaging (MRI), so these methods fail to remain robust when tested on large amounts of data. After the emergence of deep learning technology, it has solved the problem of lack of semantic information in traditional image segmentation methods to a certai...


Application Information

IPC(8): G06T7/00, G06T7/11, G06N3/04
CPC: G06T7/0012, G06T7/11, G06T2207/20081, G06T2207/20084, G06T2207/10068, G06T2207/30028, G06N3/045
Inventor: 李英, 梁宇翔, 李志云, 张宏利, 朱琦, 李书达
Owner: QINGDAO UNIV