
Image semantic segmentation method based on deep fully convolutional network and conditional random field

A technology combining a conditional random field with a fully convolutional network, applied in the field of image understanding

Publication Date: 2018-05-22 (Inactive)
CHONGQING UNIV OF TECH
Cites: 4 · Cited by: 181

AI Technical Summary

Problems solved by technology

[0004] Aiming at the problems of existing methods, the present invention provides an image semantic segmentation method based on a deep fully convolutional network and a conditional random field. The method introduces dilated convolution and a spatial pyramid pooling module into the deep fully convolutional network, and further corrects the label prediction map output by the network using a fully connected conditional random field. The dilated convolution enlarges the receptive field while keeping the resolution of the feature map unchanged; the spatial pyramid pooling module extracts regional context features at different scales from the convolutional local feature map, providing the label prediction with the relationships between different objects and the connections between objects and regional features of different scales; the fully connected conditional random field further optimizes the pixel labels according to the feature similarity of pixel intensity and position, thereby generating a high-resolution, boundary-accurate, and spatially continuous semantic segmentation map.
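
As an illustration of the dilated-convolution property described above, the sketch below (assuming a PyTorch implementation, which the patent does not specify; layer names, channel sizes, and the dilation rate are illustrative only) shows that a 3×3 convolution with dilation d and padding d preserves the feature-map resolution while its effective receptive field grows to 2d+1:

```python
import torch
import torch.nn as nn

# Hypothetical dilated-convolution block, not taken from the patent.
# With kernel_size=3, dilation=d and padding=d, the output H x W equals the input,
# while the kernel's effective receptive field grows to 2*d + 1.
class DilatedBlock(nn.Module):
    def __init__(self, in_ch=256, out_ch=256, dilation=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.conv(x))

if __name__ == "__main__":
    x = torch.randn(1, 256, 64, 64)    # a convolutional feature map
    y = DilatedBlock(dilation=2)(x)
    print(y.shape)                     # torch.Size([1, 256, 64, 64]) -- resolution preserved
```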

Detailed Description of the Embodiments

[0070] In order to make the technical means, creative features, goals and effects achieved by the present invention easy to understand, the present invention will be further described below in conjunction with specific illustrations and preferred embodiments.

[0071] Referring to Figures 1 to 3, the present invention provides a method for image semantic segmentation based on a deep fully convolutional network and a conditional random field, comprising the following steps:

[0072] S1. Construction of the deep fully convolutional semantic segmentation network model:

[0073] S11. The deep fully convolutional semantic segmentation network model includes a feature extraction module, a pyramid pooling module, and a pixel label prediction module. The feature extraction module extracts local image features by performing convolution, max pooling, and dilated convolution operations on the input image; the pyramid pooling module performs spatial pooling at different scales on ...
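
Paragraph [0073] names the three modules but the excerpt is truncated. As a hedged sketch of what a spatial pyramid pooling module of this kind typically looks like (assuming PyTorch; the pool sizes and channel counts below are illustrative assumptions, not values from the patent), each branch pools the local feature map at a different scale, projects it, and the upsampled branches are concatenated with the original features as regional context:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative spatial pyramid pooling module; pool sizes (1, 2, 3, 6) and channel
# counts are assumptions for this sketch, not specified by the patent.
class PyramidPooling(nn.Module):
    def __init__(self, in_ch=512, pool_sizes=(1, 2, 3, 6)):
        super().__init__()
        branch_ch = in_ch // len(pool_sizes)
        self.branches = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(s),
                          nn.Conv2d(in_ch, branch_ch, kernel_size=1),
                          nn.ReLU(inplace=True))
            for s in pool_sizes
        ])

    def forward(self, x):
        h, w = x.shape[2:]
        # Pool at several scales, upsample each branch back to the input resolution,
        # then concatenate the context features with the original feature map.
        ctx = [F.interpolate(b(x), size=(h, w), mode="bilinear", align_corners=False)
               for b in self.branches]
        return torch.cat([x] + ctx, dim=1)

if __name__ == "__main__":
    feat = torch.randn(1, 512, 60, 60)
    out = PyramidPooling()(feat)
    print(out.shape)   # torch.Size([1, 1024, 60, 60])
```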


Abstract

The invention provides an image semantic segmentation method based on a deep fully convolutional network and a conditional random field. The method comprises the following steps: constructing a deep fully convolutional semantic segmentation network model; performing structured prediction of pixel labels based on a fully connected conditional random field; and carrying out model training, parameter learning, and image semantic segmentation. In the method, dilated convolution and a spatial pyramid pooling module are introduced into the deep fully convolutional network, and the label prediction map output by the network is further refined with the conditional random field. The dilated convolution enlarges the receptive field while keeping the resolution of the feature map unchanged; the spatial pyramid pooling module extracts regional context features at different scales from the convolutional local feature map, providing the label prediction with the relationships between different objects and the connections between objects and regional features of different scales; the fully connected conditional random field further optimizes the pixel labels according to the feature similarity of pixel intensity and position, thereby generating a high-resolution, boundary-accurate, and spatially continuous semantic segmentation map.
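
The abstract describes the fully connected CRF refinement only at a high level. As a minimal sketch of such a step, the code below uses the pydensecrf library (a Krähenbühl-Koltun style dense CRF) to refine a network's softmax output with pairwise kernels on pixel position and RGB intensity; the library choice, kernel parameters, and iteration count are assumptions for illustration, not values taken from the patent.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, softmax_probs, n_iters=5):
    """Refine per-pixel class probabilities with a fully connected CRF.

    image:         H x W x 3 uint8 RGB image
    softmax_probs: C x H x W float array from the segmentation network
    """
    c, h, w = softmax_probs.shape
    d = dcrf.DenseCRF2D(w, h, c)
    d.setUnaryEnergy(unary_from_softmax(softmax_probs))   # -log(p) unary term

    # Smoothness kernel: pixel positions only (discourages isolated labels).
    d.addPairwiseGaussian(sxy=3, compat=3)
    # Appearance kernel: positions + RGB intensities (aligns labels to image edges).
    d.addPairwiseBilateral(sxy=60, srgb=10,
                           rgbim=np.ascontiguousarray(image), compat=10)

    q = d.inference(n_iters)                              # mean-field inference
    return np.argmax(np.array(q), axis=0).reshape(h, w)   # refined label map
```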

Description

Technical Field

[0001] The invention relates to the technical field of image understanding, and in particular to an image semantic segmentation method based on a deep fully convolutional network and a conditional random field.

Background Art

[0002] Image semantic segmentation labels image pixels according to their semantics to form different segmentation regions. Semantic segmentation is a cornerstone technology of image understanding, and it plays a pivotal role in street-scene recognition and understanding for automatic driving systems, in judging UAV landing sites, and in lesion recognition and localization in medical images.

[0003] The emergence of deep learning technology has significantly improved the performance of image semantic segmentation compared with traditional methods. Supervised learning on large datasets with deep convolutional neural networks is currently the mainstream approach to image semantic segmentation. Input the image to be segmented, use con...


Application Information

IPC(8): G06T7/11; G06K9/62
CPC: G06T7/11; G06T2207/20081; G06F18/214
Inventor: 崔少国 (Cui Shaoguo), 王勇 (Wang Yong)
Owner: CHONGQING UNIV OF TECH