Training method and segmentation method and apparatus for a segmentation learning network of a 3D image and medium

A technology for segmentation learning networks and their training methods, applied in the field of image processing and non-transitory computer-readable media. It addresses problems such as overfitting of the learning network, scarcity of training samples, and imbalance between foreground and background, and achieves fast and accurate segmentation with a high training success rate while avoiding overfitting.

Active Publication Date: 2019-08-20
SHENZHEN KEYA MEDICAL TECH CORP
Cites: 2 | Cited by: 9

AI Technical Summary

Problems solved by technology

The object of interest, as the foreground, occupies only a small part of the entire image, which leads to a serious foreground-background imbalance during training of the learning network and thus makes training difficult.
In addition, training samples are scarce because accurately delineating tumor boundaries is tedious, which often leads to overfitting of the learned network.



Examples


Embodiment Construction

[0025] Figure 3 shows a flowchart of a training method 300 of a segmentation learning network for a 3D image according to an embodiment of the present disclosure. The segmentation learning network and its training method are especially suitable for 3D images in which the object of interest occupies a proportion below a predetermined threshold (that is, the foreground and background are imbalanced to some degree). In some embodiments, the predetermined threshold is in the range of 0.0001% to 30%. Examples include 3D MRI images of the brain containing nasopharyngeal carcinoma or nasopharyngeal tumors at various stages, 3D volumetric CT images of the abdomen containing small lung nodules, and whole-body 3D volumetric CT images containing small, irregularly shaped early lesions at unknown locations, and so on.
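As a rough, hypothetical illustration of the applicability criterion above (the function name, the NumPy representation of the 3D label volume, and the example usage are assumptions, not part of the disclosure), the following sketch computes the proportion of foreground voxels in a binary 3D mask and relates it to the threshold:

    import numpy as np

    def foreground_proportion(mask_3d: np.ndarray) -> float:
        """Fraction of nonzero (foreground) voxels in a 3D label volume."""
        return float(np.count_nonzero(mask_3d)) / mask_3d.size

    # Hypothetical usage: a 3D annotation volume where 1 marks the object of interest.
    mask = np.zeros((128, 128, 64), dtype=np.uint8)
    mask[60:64, 60:64, 30:32] = 1  # a small lesion-like region

    proportion = foreground_proportion(mask)
    # The disclosure targets images where this proportion falls below a
    # predetermined threshold, e.g. somewhere in the 0.0001%-30% range.
    print(f"foreground proportion: {proportion:.6%}")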

[0026] As shown in Figure 3, the method 300 starts at step 301, in which the segmentation learning network is constructed based on the sequential joining of a plurality of dense blocks. The basic units in the dense bl...
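According to the abstract below, each basic unit consists of a batch normalization layer, a ReLU layer, and a convolution layer, with dense connections among the basic units of a block. A minimal PyTorch-style sketch of such a 3D dense block follows; the class names, growth rate, kernel size, and unit count are illustrative assumptions rather than the patent's exact configuration:

    import torch
    import torch.nn as nn

    class BasicUnit3D(nn.Module):
        """One basic unit: batch normalization -> ReLU -> 3D convolution."""
        def __init__(self, in_channels: int, growth_rate: int):
            super().__init__()
            self.bn = nn.BatchNorm3d(in_channels)
            self.relu = nn.ReLU(inplace=True)
            self.conv = nn.Conv3d(in_channels, growth_rate, kernel_size=3, padding=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.conv(self.relu(self.bn(x)))

    class DenseBlock3D(nn.Module):
        """Dense connections: each unit receives the concatenation of all earlier feature maps."""
        def __init__(self, in_channels: int, num_units: int, growth_rate: int):
            super().__init__()
            self.units = nn.ModuleList(
                BasicUnit3D(in_channels + i * growth_rate, growth_rate)
                for i in range(num_units)
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            features = [x]
            for unit in self.units:
                features.append(unit(torch.cat(features, dim=1)))
            return torch.cat(features, dim=1)

    # Hypothetical usage on a small 3D patch with 8 input channels.
    block = DenseBlock3D(in_channels=8, num_units=4, growth_rate=8)
    out = block(torch.randn(1, 8, 32, 32, 32))  # -> shape (1, 8 + 4*8, 32, 32, 32)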



Abstract

The present disclosure relates to a training method and a segmentation method and apparatus for a segmentation learning network of a 3D image, and a medium. The 3D image contains a target object, and the proportion occupied by the target object is lower than a preset threshold value. The method comprises the following steps: a segmentation learning network is constructed based on the sequential joining of a plurality of dense blocks, where dense connections exist among the basic units in each dense block and each basic unit consists of a batch normalization layer, a ReLU layer, and a convolution layer; the segmentation learning network is trained by a processor, based on a training dataset of the 3D image, using a loss function that focuses more on difficult samples and penalizes negative voxels that are far away from the object of interest. The method can rapidly and accurately segment small and irregular objects of interest using a learning network with a more compact structure and fewer parameters, and the training process of the learning network can address the foreground-background sample imbalance problem and avoid, as much as possible, the overfitting problem caused by a lack of training samples.
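The loss described above, which focuses more on difficult samples and penalizes negative voxels far from the object of interest, resembles a focal-style term combined with a distance-based weight on background voxels. The sketch below is only one plausible reading of that description, not the patent's actual loss function; the focal exponent, the Euclidean distance transform, and the weighting scheme are assumptions:

    import torch
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def hypothetical_segmentation_loss(probs: torch.Tensor,
                                       target: torch.Tensor,
                                       gamma: float = 2.0,
                                       distance_scale: float = 0.1) -> torch.Tensor:
        """probs, target: (D, H, W) CPU tensors; probs are foreground probabilities,
        target is a binary ground-truth mask.

        Combines (a) a focal-style modulation that up-weights hard voxels and
        (b) an extra weight on negative voxels proportional to their distance
        from the object of interest.
        """
        eps = 1e-7
        probs = probs.clamp(eps, 1.0 - eps)

        # Focal-style term: voxels with a low predicted probability of the true class weigh more.
        pt = torch.where(target > 0, probs, 1.0 - probs)
        focal = -((1.0 - pt) ** gamma) * torch.log(pt)

        # Distance-based weight for negative voxels: farther from the object -> larger weight.
        dist = distance_transform_edt((target.numpy() == 0).astype(np.uint8))
        weight = torch.ones_like(probs) + distance_scale * torch.from_numpy(dist).float() * (target == 0)

        return (weight * focal).mean()

    # Hypothetical usage on a small random volume.
    probs = torch.rand(32, 32, 32)
    target = torch.zeros(32, 32, 32)
    target[14:18, 14:18, 14:18] = 1.0
    loss = hypothetical_segmentation_loss(probs, target)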

Description

[0001] Cross Reference

[0002] This application claims the priority of U.S. Provisional Application No. 62/675,765, filed on May 24, 2018, the entire contents of which are incorporated herein by reference.

Technical Field

[0003] The present disclosure generally relates to image processing and analysis. More specifically, the present disclosure relates to a training method of a learning network for 3D image segmentation, a 3D image segmentation method, a segmentation apparatus, and a non-transitory computer-readable medium having a corresponding program stored thereon.

Background

[0004] Cancer is one of the major diseases facing human beings, and early detection of cancer can greatly increase the survival rate. However, early-stage tumors are usually irregular in shape and account for only a small proportion of the corresponding medical images. Take nasopharyngeal carcinoma (NPC) as an example, which is one of the most common cancers, accounting for 0.7% of all cancers. ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/00; G06T7/12; G06T7/194
CPC: G06T7/0012; G06T2207/10088; G06T2207/20081; G06T2207/20084; G06T2207/30096; G06T7/12; G06T7/194
Inventors: 宋麒, 孙善辉, 尹游兵
Owner: SHENZHEN KEYA MEDICAL TECH CORP