
An image semantic segmentation method based on a region and deep residual network

A region and deep residual network technology, applied in the field of computer vision, which can solve problems such as rough segmentation boundaries and achieve good segmentation results

Active Publication Date: 2019-04-26
JIANGXI UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

[0003] The region-based semantic segmentation method extracts mutually overlapping regions at multiple scales, which allows it to identify targets of multiple scales and obtain fine object segmentation boundaries. The method based on fully convolutional networks uses a convolutional neural network to learn features autonomously and can be trained end-to-end on the pixel-by-pixel classification task, but this approach often produces rough segmentation boundaries.




Detailed Description of the Embodiments

[0025] The present invention will be further described below in conjunction with the accompanying drawings and specific embodiments. The specific implementation steps of the image semantic segmentation method based on a region and deep residual network are as follows:

[0026] (S1): Extract candidate regions.

[0027] On the basis of Selective Search, the original image is first over-segmented into multiple initial regions. The similarity between regions is calculated from their color, texture, size and degree of overlap, and the most similar pair of regions is merged at each step; this operation is repeated until everything has been merged into a single region, so that candidate regions at different levels of the hierarchy are obtained. A certain number of candidate regions is then selected by setting a minimum region size. On the SIFT FLOW dataset and the PASCAL Context dataset, the minimum sizes set by the present invention are 100 pixels and 400 pixels respectively, and finally the average number of candidate reg...
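The merging procedure described in [0027] can be sketched as follows. This is an illustrative Python sketch rather than the patent's implementation: the initial over-segmentation and the combined color/texture/size/overlap similarity measure are assumed to be provided as inputs (`initial_regions`, `similarity`), and only the hierarchical merging and the minimum-size filter are shown.

```python
# Illustrative sketch (not the patented code): Selective-Search-style
# hierarchical merging followed by the minimum-size filter from [0027].
from itertools import combinations

def generate_candidates(initial_regions, similarity, min_size=100):
    """Repeatedly merge the most similar pair of regions until one region
    remains, collecting every intermediate region as a candidate proposal."""
    # each region is assumed to be {'pixels': set of (y, x), 'size': int}
    regions = list(initial_regions)
    candidates = list(regions)
    while len(regions) > 1:
        # pick the pair of regions with the highest combined similarity
        i, j = max(combinations(range(len(regions)), 2),
                   key=lambda p: similarity(regions[p[0]], regions[p[1]]))
        merged = {'pixels': regions[i]['pixels'] | regions[j]['pixels'],
                  'size': regions[i]['size'] + regions[j]['size']}
        regions = [r for k, r in enumerate(regions) if k not in (i, j)]
        regions.append(merged)
        candidates.append(merged)
    # keep only proposals above the minimum size: 100 px on SIFT FLOW,
    # 400 px on PASCAL Context according to the description above
    return [r for r in candidates if r['size'] >= min_size]
```

In the standard Selective Search formulation, the `similarity` callable would combine normalized color-histogram intersection, texture similarity, a preference for merging small regions, and a fill term measuring how well two regions fit together; any of these cues can be reweighted without changing the merging loop above.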



Abstract

The invention discloses an image semantic segmentation method based on a region and a deep residual network. Region-based semantic segmentation methods extract mutually overlapping regions at multiple scales, can identify targets of multiple scales, and obtain fine object segmentation boundaries. Methods based on fully convolutional networks use a convolutional neural network to learn features autonomously and can be trained end-to-end on a pixel-by-pixel classification task, but they usually produce rough segmentation boundaries. The proposed method combines the advantages of both: first, a region generation network produces candidate regions in the image; then a deep residual network with dilated convolution extracts features from the image to obtain a feature map; the candidate regions are combined with the feature map to obtain region features, and each region feature is mapped to every pixel in the region; finally, pixel-by-pixel classification is carried out using a global average pooling layer. In addition, a multi-model fusion method is used: different inputs are set for the same network model during training to obtain multiple models, and feature fusion is then carried out at the classification layer to obtain the final segmentation result. Experimental results on the SIFT FLOW and PASCAL Context datasets show that the proposed algorithm achieves relatively high average accuracy.
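The overall pipeline in the abstract can be approximated by the following PyTorch sketch. It is a hedged illustration, not the patented network: the backbone choice (torchvision's resnet101 with dilation in its last two stages), the representation of candidate regions as boolean masks, and the 1x1-convolution classifier head are assumptions made for the example, and the multi-model fusion step is omitted.

```python
# Hedged sketch of the pipeline in the abstract: dilated-ResNet features ->
# per-region average pooling -> features mapped back to pixels -> pixel-wise
# classification. Details are assumptions, not the patented architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet101

class RegionSegSketch(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        # dilated convolutions in the last two stages keep the feature map large
        backbone = resnet101(replace_stride_with_dilation=[False, True, True])
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc
        self.classifier = nn.Conv2d(2048, num_classes, kernel_size=1)

    def forward(self, image, region_masks):
        # image: (1, 3, H, W); region_masks: (R, H, W) boolean candidate regions
        fmap = self.features(image)
        fmap = F.interpolate(fmap, size=image.shape[-2:],
                             mode='bilinear', align_corners=False)
        pixel_feat = torch.zeros_like(fmap)
        for mask in region_masks:
            m = mask.unsqueeze(0).unsqueeze(0).float()           # (1, 1, H, W)
            area = m.sum().clamp(min=1.0)
            region_feat = (fmap * m).sum(dim=(2, 3), keepdim=True) / area
            # map the pooled region feature back onto the region's own pixels
            pixel_feat = pixel_feat + region_feat * m
        return self.classifier(pixel_feat)                       # (1, classes, H, W)
```

The global-average-pooling classification layer and the multi-model feature fusion at the classification layer described in the abstract would sit on top of this; they are left out here to keep the sketch short.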

Description

Technical field

[0001] The invention belongs to the field of computer vision and relates to digital image preprocessing, model improvement, image semantic segmentation and simulation realization.

Background technique

[0002] Image semantic segmentation combines the tasks of image segmentation and object recognition. Its purpose is to divide an image into several groups of regions with specific semantic meanings and to label the category of each region, thereby realizing the reasoning process from low-level features to high-level semantics and finally obtaining a segmented image with pixel-level semantic annotation, that is, assigning to each pixel in the image a label representing its semantic target category. Image semantic segmentation has a wide range of applications, such as autonomous driving, geographic information systems, medical image analysis, and wearable applications such as virtual or augmented reality. More and more emerging application fields require accura...


Application Information

IPC(8): G06K 9/34, G06K 9/62
CPC: G06V 10/267, G06F 18/24, G06F 18/214
Inventor: 罗会兰, 卢飞, 余乐陶
Owner: JIANGXI UNIV OF SCI & TECH