
Full convolutional network semantic segmentation method based on multi-scale low-level feature fusion

A fully convolutional network and multi-scale feature technology, applied in the field of fully convolutional network semantic segmentation. It addresses problems such as rough segmentation of object edges and the inability to recognize small-scale objects, reduces edge blurring, improves recognition, and strengthens sensitivity to objects.

Active Publication Date: 2018-11-16
SOUTH CHINA UNIV OF TECH
Cites: 4 · Cited by: 66

AI Technical Summary

Problems solved by technology

Although these algorithms expand the receptive field of the fully convolutional neural network to some extent, they tend to produce very rough edges around segmented objects and are incapable of recognizing small-scale objects.



Examples


Embodiment

[0032] As shown in Figure 1, a flowchart of an embodiment of the fully convolutional network semantic segmentation method based on multi-scale low-level feature fusion of the present invention, this embodiment comprises the following steps:

[0033] 1) Use a fully convolutional neural network to extract dense features from the input image;

[0034] 2) Perform multi-scale feature fusion processing on the extracted features;

[0035] 3) The multi-scale fused feature map is processed through a 3×3 convolutional layer, a category convolutional layer, and bilinear interpolation upsampling to obtain a score map of the same size as the original image, thereby accomplishing the image semantic segmentation task.
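As a rough illustration of these three steps (not the patent's actual implementation), the pipeline can be sketched in NumPy. Random weights stand in for a trained backbone, the 3×3 convolution of step 3 is omitted for brevity, and all sizes, channel counts, and function names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_pool(x, k):
    """Average-pool an (H, W, C) map with a k x k window and stride k."""
    h, w, c = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))

def bilinear_upsample(x, out_h, out_w):
    """Bilinear-interpolate an (H, W, C) map up to (out_h, out_w, C)."""
    h, w = x.shape[:2]
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]; wx = (xs - x0)[None, :, None]
    top = x[y0][:, x0] * (1 - wx) + x[y0][:, x1] * wx
    bot = x[y1][:, x0] * (1 - wx) + x[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

H, W = 64, 64      # input image size (assumed)
C = 8              # backbone feature channels (assumed)
num_classes = 5    # assumed

# Step 1: stand-in for FCN dense feature extraction -- a 4x downsample plus
# a random 1x1 projection (a real backbone would be a trained conv net).
image = rng.random((H, W, 3))
proj = rng.standard_normal((3, C))
feat = avg_pool(image, 4) @ proj            # (H/4, W/4, C)

# Step 2: multi-scale pooling branches, upsampled back and channel-spliced.
fh, fw = feat.shape[:2]
branches = [bilinear_upsample(avg_pool(feat, k), fh, fw) for k in (1, 2, 4)]
fused = np.concatenate(branches, axis=-1)   # (fh, fw, 3*C)

# Step 3: category convolution (1x1, as a matrix product) and bilinear
# upsampling to a score map the size of the original image.
w_cls = rng.standard_normal((fused.shape[-1], num_classes))
scores = bilinear_upsample(fused @ w_cls, H, W)   # (H, W, num_classes)
segmentation = scores.argmax(axis=-1)             # per-pixel class labels
```

The final `argmax` over the score map yields the per-pixel semantic labels that the method's step 3 describes.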

[0036] Image semantic segmentation is a typical problem of predicting the semantic category of each pixel through dense feature extraction. Therefore, improving the per-pixel category prediction accuracy requires both global and fine-grained feature representations. The...



Abstract

The invention discloses a fully convolutional network (FCN) semantic segmentation method based on multi-scale low-level feature fusion. The method first extracts dense features from an input image using the FCN, then subjects the extracted feature maps to multi-scale feature fusion: the input feature maps are pooled at multiple scales to form several processing branches; low-level feature fusion is performed on the scale-invariant pooled feature maps in each branch, while low-level feature fusion with upsampling is performed on the scale-reduced pooled feature maps. Each branch's feature maps are then fed into a 3×3 convolutional layer to learn deeper features and reduce the number of output channels. Finally, the branch outputs are combined by concatenation along the channel dimension, and a score map of the same size as the original image is obtained through a category convolutional layer and bilinear interpolation upsampling. By combining local low-level feature information with global multi-scale image information, the method achieves a significant improvement in image semantic segmentation.
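The abstract's per-branch 3×3 convolution, which deepens features and shrinks the channel count before the branches are spliced, can be illustrated with a minimal NumPy "same"-padding convolution. The weights are random and the shapes (a 32×32 map, 64 channels reduced to 16) are assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv3x3(x, w):
    """3x3 convolution with 'same' zero padding.
    x: (H, W, Cin) feature map; w: (3, 3, Cin, Cout) kernel."""
    h, wd, _ = x.shape
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((h, wd, w.shape[-1]))
    for i in range(3):          # slide the 3x3 window as nine shifted
        for j in range(3):      # 1x1 projections summed together
            out += pad[i:i + h, j:j + wd] @ w[i, j]
    return out

# One fusion branch: 64-channel fused low-level features reduced to 16 channels.
branch = rng.random((32, 32, 64))
weights = rng.standard_normal((3, 3, 64, 16)) * 0.05
reduced = conv3x3(branch, weights)          # (32, 32, 16)

# Splicing three such branches along the channel axis yields the map that is
# then fed to the category convolution layer.
spliced = np.concatenate([reduced] * 3, axis=-1)   # (32, 32, 48)
```

Reducing channels per branch before concatenation keeps the spliced map's channel count, and thus the category layer's parameter count, manageable even with several scales.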

Description

Technical field

[0001] The invention relates to the technical fields of machine learning and computer vision, in particular to a fully convolutional network semantic segmentation method based on the fusion of multi-scale low-level features.

Background technique

[0002] In recent years, with the development of science and technology, computer performance has improved rapidly, and the fields of machine learning, computer vision, and artificial intelligence have likewise developed quickly. Image semantic segmentation has accordingly become an important research topic. Image semantic segmentation divides an image into several regions according to established criteria, where the pixels within each region share certain correlations, and labels each region with its semantics, such as sky, grassland, sofa, or bed. From a technical point of view, image semantic segmentation is to some extent similar to aggregation in data processing, cluste...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/10; G06K9/62; G06K9/46; G06N3/04; G06N3/08
CPC: G06N3/08; G06T7/10; G06V10/50; G06N3/045; G06F18/24
Inventors: 罗荣华 (Luo Ronghua), 陈俊生 (Chen Junsheng)
Owner SOUTH CHINA UNIV OF TECH