
Semantic segmentation network integrating multi-scale feature space and semantic space

A multi-scale feature and semantic segmentation technology applied in the field of scene understanding. It addresses the problem that existing pixel-by-pixel classification methods consider neither the structure of continuous pixel regions of the same class nor the structural differences between adjacent regions of different classes, and achieves the effect of high-resolution segmentation.

Active Publication Date: 2019-03-22
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

[0011] The purpose of the present invention is to overcome the problem that existing semantic segmentation methods based on pixel-by-pixel classification consider neither the structure of continuous pixel regions of the same class nor the structural differences between adjacent pixel regions of different classes, and at the same time to improve the semantic segmentation accuracy for small objects and object details. To this end, a semantic segmentation network integrating multi-scale feature space and semantic space is proposed.




Embodiment Construction

[0021] In order to improve semantic segmentation performance for small objects, object details, and pixels near edges, the present invention proposes a semantic segmentation network that integrates multi-scale feature space and semantic space, and realizes an end-to-end high-performance semantic segmentation system based on this network. The network is fully convolutional, so the input image may be of any scale; it only needs to be padded at the edges so that its height and width are divisible by the network's maximum downsampling factor. Here, the multi-scale feature space refers to the multi-scale feature maps generated by the feature-extraction part of the network through multiple layers of convolution and downsampling, and the multi-scale semantic space refers to the prediction maps obtained by supervising the network at multiple scales. The main structure of the network is shown in Figure 2. Our proposed network is ma...
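
The edge padding described above is straightforward to compute. The sketch below assumes a PyTorch implementation and a maximum downsampling factor of 16 (consistent with the abstract's VGG16 encoder with the fifth pooling layer removed); the helper `pad_to_multiple` is a hypothetical name for illustration, not taken from the patent.

```python
import torch
import torch.nn.functional as F

def pad_to_multiple(image: torch.Tensor, stride: int = 16) -> torch.Tensor:
    """Pad H and W of an NCHW tensor so both are divisible by the
    network's maximum downsampling factor (hypothetical helper)."""
    _, _, h, w = image.shape
    pad_h = (stride - h % stride) % stride
    pad_w = (stride - w % stride) % stride
    # F.pad takes (left, right, top, bottom); pad only right and bottom.
    return F.pad(image, (0, pad_w, 0, pad_h), mode="reflect")

x = torch.randn(1, 3, 500, 375)  # arbitrary input resolution
print(pad_to_multiple(x).shape)  # torch.Size([1, 3, 512, 384])
```

Reflect padding is one reasonable choice here; any scheme works as long as the padded pixels are cropped back out of the final prediction map.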



Abstract

The invention relates to a semantic segmentation network integrating multi-scale feature space and semantic space. The method comprises: determining the backbone network of the encoding end, using VGG16 as the backbone of the encoder while removing its fifth pooling layer to eliminate one downsampling; designing a fusion module of feature space and semantic space for the network decoder; achieving high-resolution, high-precision semantic segmentation by applying the fusion module over the multi-scale feature space and semantic space; and outputting the semantic segmentation result.
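
As a concrete illustration of the encoder described in the abstract, the sketch below builds a VGG16 feature extractor and drops its final pooling layer, reducing the total downsampling from 32x to 16x. It relies on torchvision's layer layout (pool5 is the last module of `vgg16().features`), which is an assumption about how the patent's backbone could be realized, not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

def build_encoder() -> nn.Sequential:
    """VGG16 backbone with the fifth pooling layer removed, so the
    deepest feature map is 1/16 of the input instead of 1/32 (sketch)."""
    features = vgg16(weights=None).features        # 31-module nn.Sequential
    assert isinstance(features[-1], nn.MaxPool2d)  # last module is pool5
    return nn.Sequential(*list(features.children())[:-1])

encoder = build_encoder()
x = torch.randn(1, 3, 512, 512)
print(encoder(x).shape)  # torch.Size([1, 512, 32, 32]), i.e. 512 / 16
```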

Description

Technical field

[0001] The invention belongs to scene understanding technology in the fields of computer vision, pattern recognition, deep learning, and artificial intelligence, and in particular relates to pixel-level semantic segmentation of scenes in images or video using deep convolutional neural networks.

Background technique

[0002] As shown in Figure 1, in order to enlarge the receptive field of a deep network and reduce the amount of computation, the backbone of existing deep convolutional neural networks typically downsamples the input by a factor of 1/2 five times, to 1/32 of the input resolution. After multiple downsamplings, the features of small objects and of object details (such as edges) are gradually absorbed into the surrounding pixel regions, and the distinguishability of small-object features steadily declines. Existing representative semantic segmentation methods based on deep neural networks, such as FCN [1], direc...
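
The resolution loss from those five 1/2 downsamplings is easy to trace with arithmetic; the snippet below just walks a hypothetical 512x512 input through them.

```python
# Five stride-2 downsamplings shrink the map to 1/32 of the input size.
size = 512
for stage in range(1, 6):
    size //= 2
    print(f"after downsampling {stage}: {size}x{size}")
# Prints 256, 128, 64, 32, 16: each cell of the final 16x16 map covers
# a 32x32 patch of the input, so smaller structures blur together.
```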


Application Information

IPC(8): G06T7/11
CPC: G06T7/11; G06T2207/10004; G06T2207/20081; G06T2207/20084; Y02D10/00
Inventors: 朱海龙 (Zhu Hailong), 庞彦伟 (Pang Yanwei)
Owner: TIANJIN UNIV