
Adaptive network suitable for high-reflection bright spot segmentation in retinal optical coherence tomography image

The technology relates to optical coherence tomography and adaptive networks, applied in image analysis, image enhancement, image data processing, and related fields. It addresses problems such as poor segmentation performance on small targets and the inability to adaptively segment objects of different shapes and sizes, and it achieves good segmentation performance, optimizes the model design, and overcomes the effect of data inconsistency.

Pending Publication Date: 2021-02-02
SUZHOU BIGVISION MEDICAL TECH CO LTD
Cites: 0 | Cited by: 3

AI Technical Summary

Problems solved by technology

However, the encoder-decoder structure of the original U-Net cannot effectively extract and exploit global features, cannot adapt to segmentation objects of various shapes and sizes, and performs poorly when segmenting small objects.



Examples


Embodiment

[0022] Embodiment: Referring to Figure 1, an adaptive network suitable for segmenting highly reflective bright spots in retinal optical coherence tomography images is shown. It includes a feature encoding module, an adaptive SA module applied to the deep layers of the encoder module, and a feature decoding module.
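For orientation, the following is a rough PyTorch sketch of how such a three-part network could be wired, assuming a U-Net-style layout: an encoder whose downsampling positions would host the dual residual (DR) modules, the adaptive SA module at the deepest level, and a decoder that upsamples and fuses encoder features before a 1×1 output convolution. The channel widths, depth, class count, and the DR/SA internals (shown here as simple stand-ins) are illustrative assumptions rather than details taken from the patent; closer sketches of the DR and SA modules follow below.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Plain 3x3 conv + BN + ReLU stage standing in for the feature extraction unit.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class AdaptiveSegNetSketch(nn.Module):
    """Hypothetical wiring of encoder, adaptive SA module, and decoder (names are illustrative)."""
    def __init__(self, in_ch=1, num_classes=2, widths=(32, 64, 128, 256)):
        super().__init__()
        self.enc = nn.ModuleList()
        c_prev = in_ch
        for c in widths:
            # A DR module would be embedded at each downsampling position (see sketch below).
            self.enc.append(conv_block(c_prev, c))
            c_prev = c
        self.pool = nn.MaxPool2d(2)
        self.sa = nn.Identity()  # placeholder for the adaptive SA module (see sketch below)
        self.up, self.dec = nn.ModuleList(), nn.ModuleList()
        for c_hi, c_lo in zip(widths[::-1][:-1], widths[::-1][1:]):
            self.up.append(nn.ConvTranspose2d(c_hi, c_lo, kernel_size=2, stride=2))
            self.dec.append(conv_block(2 * c_lo, c_lo))
        self.head = nn.Conv2d(widths[0], num_classes, kernel_size=1)  # 1x1 output convolution

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.enc):
            x = enc(x)
            skips.append(x)
            if i < len(self.enc) - 1:
                x = self.pool(x)
        x = self.sa(x)  # adaptive refinement of the deepest encoder features
        for up, dec, skip in zip(self.up, self.dec, reversed(skips[:-1])):
            x = dec(torch.cat([up(x), skip], dim=1))  # 2x2 deconvolution + skip concatenation
        return self.head(x)
```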

[0023] The feature encoding module includes a feature extraction unit and a dual residual DR module embedded at the downsampling positions of the feature extraction unit. The dual residual DR module includes two residual blocks, and each residual block consists of a 1×1 convolutional layer, a 3×3 convolutional layer, a 1×1 convolutional layer, a batch normalization layer, and a ReLU activation function. In this embodiment, the feature extraction unit is a U-Net encoder, and the output of the fourth layer of the feature extraction unit is connected to the feature input terminal. In order to obtain representative feature maps, the e...
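The residual block described above admits a straightforward implementation. The sketch below is a minimal PyTorch rendering of one residual block (1×1, 3×3, 1×1 convolutions with batch normalization and ReLU) and of a dual residual DR module as two such blocks in series; the exact channel widths, layer ordering, and skip wiring are not fixed by this excerpt and are assumed here for illustration.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """1x1 -> 3x3 -> 1x1 convolutions with batch normalization, ReLU, and an identity skip."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual (identity) connection eases gradient propagation through the encoder.
        return self.relu(self.body(x) + x)

class DualResidualModule(nn.Module):
    """Two residual blocks in series, embedded at a downsampling position of the encoder."""
    def __init__(self, channels: int):
        super().__init__()
        self.blocks = nn.Sequential(ResidualBlock(channels), ResidualBlock(channels))

    def forward(self, x):
        return self.blocks(x)
```

The identity skip around each bottleneck is what lets the module "simplify the learning process and enhance gradient propagation" as claimed in the abstract; the bottleneck widths could be varied without changing the overall pattern.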



Abstract

The invention discloses a self-adaptive network suitable for segmenting high-reflection bright spots in retinal optical coherence tomography images. The self-adaptive network comprises a feature coding module, a self-adaptive SA module, and a feature decoding module. The feature coding module comprises a feature extraction unit and a dual residual DR module, and the dual residual DR module comprises two residual blocks. The adaptive SA module comprises a feature input end, a deformable convolution layer, matrix multiplication, and pixel-level summation. The feature decoding module reconstructs the high-level features generated by the adaptive SA module, progressively upsamples them with 2×2 deconvolution layers while concatenating them with the local information guided by the dual residual DR module, and takes the result of a final 1×1 convolution layer as its output. The method simplifies the learning process of the whole network and enhances gradient propagation while strengthening feature extraction, and can adapt to segmentation targets of different sizes.
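The adaptive SA module is described only at the level of its ingredients (a feature input end, a deformable convolution layer, matrix multiplication, and pixel-level summation). The sketch below is one plausible reading, assuming a self-attention-style block in which a deformable convolution (torchvision.ops.DeformConv2d, with offsets predicted by an ordinary convolution) adaptively resamples the input, matrix multiplication forms an attention map over spatial positions, and the attended features are added pixel-wise back to the input. The 1×1 projection widths, the softmax, and the learnable blending weight are assumptions, not details from the patent.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class AdaptiveSAModule(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Offsets for the deformable convolution are predicted from the input itself.
        self.offset_conv = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                     kernel_size=kernel_size, padding=pad)
        self.deform_conv = DeformConv2d(channels, channels,
                                        kernel_size=kernel_size, padding=pad)
        # 1x1 projections for an attention map built by matrix multiplication (assumed).
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # assumed learnable blending weight

    def forward(self, x):
        b, c, h, w = x.shape
        feat = self.deform_conv(x, self.offset_conv(x))   # adaptive sampling of the input
        q = self.query(feat).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(feat).flatten(2)                     # (B, C//8, HW)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)     # spatial attention via matmul
        v = self.value(feat).flatten(2)                   # (B, C, HW)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                       # pixel-level summation with input
```

Because the module preserves the spatial size and channel count of its input, it can be dropped between the deepest encoder stage and the decoder (for example on a fourth-level U-Net feature map) without altering the rest of the network.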

Description

Technical field

[0001] The present application relates to the technical field of retinal OCT image segmentation, and in particular to an adaptive network suitable for segmenting highly reflective bright spots in retinal optical coherence tomography images.

Background technique

[0002] Hard exudates, a prominent fundus change in diabetic retinopathy, appear as hyperreflective bright spots in optical coherence tomography imaging. In recent years there have been many studies on the detection of hard exudates in retinal color photographs, such as detection based on support vector machines, automatic detection based on k-nearest-neighbor graph region merging, and threshold-based detection. A related study also segmented bright spots in polarization-sensitive optical coherence tomography (Polarization-Sensitive Optical Coherence Tomography, PS-OCT). However, segmentation methods based on deep learning are rare. At present, many dee...

Claims


Application Information

IPC(8): G06T7/00; G06T7/11; G06N3/04; G06N3/08
CPC: G06T7/0012; G06T7/11; G06N3/08; G06T2207/30041; G06T2207/20081; G06T2207/20084; G06T2207/10101; G06N3/045
Inventor: 陈新建 (Chen Xinjian), 姚辰璞 (Yao Chenpu)
Owner: SUZHOU BIGVISION MEDICAL TECH CO LTD