
Multi-scale feature fusion ultrasonic image semantic segmentation method based on adversarial learning

A multi-scale feature fusion and semantic segmentation technology, applied in the field of medical image understanding, which addresses the problems of reduced feature-map resolution, failure to consider the similarity between pixel features, and insufficient use of local features and global context features.

Active Publication Date: 2018-07-10
CHONGQING NORMAL UNIVERSITY

Problems solved by technology

However, current deep-learning-based semantic segmentation of breast ultrasound images has the following shortcomings: (1) predicting pixel categories from single-scale features does not make full use of local features and global context features, which easily produces misclassified points; (2) the network applies multiple levels of pooling (a typical VGG network has five pooling layers, reducing the feature map to 1/32 of the original image), so the resolution of the feature map drops sharply and the final segmentation map is much smaller than the input; (3) each pixel label is predicted without considering the similarity between pixel features, so the segmentation map output by the network lacks high-order spatial continuity.
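As a minimal illustration of shortcoming (2), the PyTorch sketch below shows how five 2× pooling stages shrink a feature map to 1/32 of the input resolution; the 256×256 input size is a hypothetical example, not a value from the patent.

```python
import torch
import torch.nn as nn

# Five 2x max-pooling stages, as in a typical VGG backbone, shrink the
# feature map to 1/32 of the input resolution.
x = torch.randn(1, 3, 256, 256)  # hypothetical 256x256 ultrasound image
five_pools = nn.Sequential(*[nn.MaxPool2d(kernel_size=2, stride=2) for _ in range(5)])
print(five_pools(x).shape)  # torch.Size([1, 3, 8, 8]) -- 256 / 2**5 = 8
```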


Examples


Embodiment approach

[0077] It can be seen from Table 2 that each convolutional layer of the adversarial discrimination network uses a 5×5 kernel with a stride of 2, and that the numbers of kernels in the first to sixth convolutional layers are 32, 64, 128, 256, 512 and 1024 in order; the sizes of the first to third fully connected layers are 1024, 512 and 2, respectively, where 2 corresponds to the two categories of whether the input image comes from the segmentation network or from the segmentation label. Specifically, the input of the adversarial discrimination network has 2 channels, representing the probability distribution maps of pixels belonging to the two categories of normal tissue and lesion. For each pair of images (B-mode image and elasticity image), the segmentation map and the segmentation label each correspond to two probability distribution maps: one is the probability distribution map of each pixel belonging to normal tissue, and the other is the probability distribution map of ...
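A minimal PyTorch sketch of the discriminator architecture described above is given below. The 128×128 input resolution, the "same"-style padding, and the ReLU activations are assumptions added for illustration; they are not stated in this excerpt.

```python
import torch
import torch.nn as nn

class AdversarialDiscriminator(nn.Module):
    """Sketch of the adversarial discrimination network of [0077]:
    six 5x5 stride-2 convolutions with 32..1024 kernels, followed by
    fully connected layers of sizes 1024, 512 and 2. The 2-channel input
    holds the probability maps for normal tissue and lesion."""

    def __init__(self, in_size=128):  # in_size is an assumed input resolution
        super().__init__()
        chans = [2, 32, 64, 128, 256, 512, 1024]
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=5, stride=2, padding=2),
                       nn.ReLU(inplace=True)]
        self.features = nn.Sequential(*layers)
        feat_size = in_size // 2 ** 6  # six stride-2 stages halve the map six times
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(1024 * feat_size * feat_size, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 2),  # segmentation-network output vs. ground-truth label
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```

For example, `AdversarialDiscriminator()(torch.randn(1, 2, 128, 128))` returns a tensor of shape (1, 2), one logit per source category.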



Abstract

The invention provides a multi-scale feature fusion ultrasonic image semantic segmentation method based on adversarial learning, comprising the following steps: building a multi-scale feature fusion semantic segmentation network model, building an adversarial discrimination network model, carrying out adversarial training and model parameter learning, and carrying out automatic segmentation of breast lesions. The method predicts pixel classes from the multi-scale features of input images at different resolutions, which improves the accuracy of pixel class label prediction; it uses dilated convolution in place of part of the pooling layers to raise the resolution of the segmented image; and the adversarial discrimination network guides the segmentation network so that the segmented images it generates cannot be distinguished from the segmentation labels, ensuring good appearance and spatial continuity of the segmented image. The result is a more precise, high-resolution segmentation of breast lesions in ultrasound images.
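As a hedged illustration of the "dilated convolution instead of pooling" idea mentioned in the abstract, the snippet below contrasts the output sizes of a pooling layer and a dilated convolution; the channel count, input size and dilation rate are illustrative assumptions, not values taken from the patent.

```python
import torch
import torch.nn as nn

# A dilated (atrous) convolution enlarges the receptive field without
# reducing resolution, whereas pooling halves it.
x = torch.randn(1, 64, 64, 64)

pooled  = nn.MaxPool2d(kernel_size=2, stride=2)(x)                    # -> 1x64x32x32
dilated = nn.Conv2d(64, 64, kernel_size=3, dilation=2, padding=2)(x)  # -> 1x64x64x64 (resolution kept)

print(pooled.shape, dilated.shape)
```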

Description

Technical field

[0001] The invention relates to the technical field of medical image understanding, in particular to a multi-scale feature fusion ultrasound image semantic segmentation method based on adversarial learning.

Background art

[0002] Breast cancer is a malignant tumor arising in breast epithelial tissue and seriously threatens women's health and quality of life. Ultrasound examination is simple, convenient, economical and radiation-free, and has become an important tool for the clinical diagnosis of breast cancer. Ultrasound B-mode imaging combined with ultrasound elastography is an important method for the clinical diagnosis of breast diseases: the B-mode image shows the structure of the breast tissue, elastography measures the elasticity of the breast tissue, and the two modalities can be compared against each other to detect and locate breast lesions more accurately.

[0003] Accurately identifying and segmenting breast lesions from ultrasound images can provide imp...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/34, G06N3/04
CPC: G06V10/267, G06V2201/03, G06N3/045
Inventors: 崔少国 (Cui Shaoguo), 张建勋 (Zhang Jianxun), 刘畅 (Liu Chang)
Owner: CHONGQING NORMAL UNIVERSITY