
A Semantic Image Segmentation Method Based on Region and Deep Residual Networks

A deep-residual-network technology applied in the field of computer vision; it addresses problems such as rough segmentation boundaries and achieves good segmentation results

Active Publication Date: 2022-05-03
JIANGXI UNIV OF SCI & TECH
Cites: 6 · Cited by: 0

AI Technical Summary

Problems solved by technology

[0003] Region-based semantic segmentation methods extract overlapping regions at multiple scales, which allows them to identify targets of multiple scales and obtain fine object segmentation boundaries. Methods based on fully convolutional networks use convolutional neural networks to learn features autonomously and can be trained end-to-end for the pixel-by-pixel classification task, but this approach often produces rough segmentation boundaries.

Method used



Examples


Embodiment Construction

[0025] The present invention will be further described below in conjunction with the accompanying drawings and specific embodiments. The specific implementation steps of the image semantic segmentation method based on regions and a deep residual network are as follows:

[0026] (S1): Extract candidate regions.

[0027] On the basis of Selective Search, over-segmentation is used to divide the original image into multiple initial regions. The similarity between regions is calculated from their color, texture, size and overlap, and the most similar regions are merged in turn; this operation is repeated until everything has been merged into a single region, so that candidate regions at different levels are obtained. A certain number of candidate regions are then filtered by setting a minimum region size. On the SIFT FLOW dataset and the PASCAL Context dataset, the minimum sizes set by the present invention are 100 pixels and 400 pixels respectively, and finally the average number of candidate reg...
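A minimal Python sketch of step (S1), not the authors' code: skimage's felzenszwalb over-segmentation stands in for the initial over-segmentation, and only a colour-histogram similarity drives the greedy merging, whereas the patent combines colour, texture, size and overlap and restricts merging to adjacent regions.

```python
# Sketch of hierarchical candidate-region extraction in the spirit of
# Selective Search; written for clarity, not speed.
import numpy as np
from skimage.io import imread
from skimage.segmentation import felzenszwalb

def color_hist(img, mask, bins=8):
    """Normalised per-channel colour histogram of the pixels inside one region."""
    hists = [np.histogram(img[..., c][mask], bins=bins, range=(0, 255))[0]
             for c in range(img.shape[2])]
    h = np.concatenate(hists).astype(float)
    return h / (h.sum() + 1e-8)

def candidate_regions(img, min_size=100):
    """Over-segment, then repeatedly merge the two most similar regions,
    keeping every region larger than min_size as a candidate mask."""
    labels = felzenszwalb(img, scale=100, sigma=0.8, min_size=50)
    regions = {int(l): labels == l for l in np.unique(labels)}
    hists = {l: color_hist(img, m) for l, m in regions.items()}
    # initial regions already above the size threshold count as candidates too
    candidates = [m for m in regions.values() if m.sum() >= min_size]

    while len(regions) > 1:
        keys = list(regions)
        best, best_sim = None, -1.0
        for i in range(len(keys)):                 # most similar pair by
            for j in range(i + 1, len(keys)):      # histogram intersection
                sim = np.minimum(hists[keys[i]], hists[keys[j]]).sum()
                if sim > best_sim:
                    best, best_sim = (keys[i], keys[j]), sim
        a, b = best
        merged = regions.pop(a) | regions.pop(b)
        hists.pop(a), hists.pop(b)
        new_id = max(regions, default=0) + 1
        regions[new_id] = merged
        hists[new_id] = color_hist(img, merged)
        if merged.sum() >= min_size:   # 100 px on SIFT FLOW, 400 px on PASCAL Context
            candidates.append(merged)
    return candidates

# e.g. masks = candidate_regions(imread("some_image.png"), min_size=100)
```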



Abstract

The invention discloses an image semantic segmentation method based on regions and a deep residual network. Region-based semantic segmentation methods extract overlapping regions at multiple scales, which allows them to identify targets of multiple scales and obtain fine object segmentation boundaries. Methods based on fully convolutional networks use convolutional neural networks to learn features autonomously and can be trained end-to-end for the pixel-by-pixel classification task, but they usually produce rough segmentation boundaries. The present invention combines the advantages of the two approaches: first, a region generation network generates candidate regions in the image; then a deep residual network with dilated convolution extracts a feature map from the image, the candidate regions are combined with the feature map to obtain region features, and these features are mapped to every pixel in the region; finally, a global average pooling layer is used for pixel-by-pixel classification. The present invention also uses multi-model fusion: different inputs are set in the same network model for training to obtain multiple models, and feature fusion is then performed at the classification layer to obtain the final segmentation result. Experimental results on the SIFT FLOW and PASCAL Context datasets show that the algorithm of the present invention achieves higher average accuracy.
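A hedged PyTorch sketch of the pipeline described above, under these assumptions: a torchvision ResNet-50 with dilation in its last two stages stands in for the dilated deep residual backbone, a 1x1 convolution plays the role of the per-pixel classification layer, candidate regions are supplied as binary masks from step (S1), and 33 classes corresponds to the SIFT FLOW setting. It is an illustrative sketch, not the patented implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50   # torchvision >= 0.13

class RegionDilatedSegNet(nn.Module):
    def __init__(self, num_classes=33):
        super().__init__()
        backbone = resnet50(weights=None,
                            replace_stride_with_dilation=[False, True, True])
        # keep everything up to the last residual stage -> output stride 8
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.classifier = nn.Conv2d(2048, num_classes, kernel_size=1)

    def forward(self, img, region_masks):
        """img: (1, 3, H, W) float tensor; region_masks: (R, H, W) binary masks."""
        fmap = self.features(img)                              # (1, 2048, h, w)
        logits = self.classifier(fmap)                         # (1, C, h, w)
        logits = F.interpolate(logits, size=img.shape[-2:],
                               mode="bilinear", align_corners=False)
        # map each candidate region's average score back onto its own pixels,
        # keeping the best-scoring region hypothesis per pixel
        scores = torch.full_like(logits, float("-inf"))
        for m in region_masks:
            m = m.bool()
            n = int(m.sum())
            region_score = logits[..., m].mean(dim=-1, keepdim=True)  # (1, C, 1)
            scores[..., m] = torch.maximum(scores[..., m],
                                           region_score.expand(-1, -1, n))
        return scores   # pixels outside every region keep -inf and need a fallback
```

Usage in this sketch would be `labels = model(img, masks).argmax(dim=1)`; the multi-model fusion described in the abstract would then amount to averaging the score maps of several such networks trained on different inputs before taking the per-pixel argmax.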

Description

Technical field

[0001] The invention belongs to the field of computer vision and relates to digital image preprocessing, model improvement, image semantic segmentation and simulation realization.

Background technique

[0002] Image semantic segmentation combines the tasks of image segmentation and target recognition. Its purpose is to divide the image into several groups of regions with specific semantic meanings and to label the category of each region, so as to realize the reasoning process from low-level to high-level semantics and finally obtain a segmented image with pixel-level semantic annotation, that is, each pixel in the image is assigned a label representing its semantic target category. Image semantic segmentation has a wide range of applications, such as autonomous driving, geographic information systems, medical image analysis, and wearable applications such as virtual or augmented reality. More and more emerging application fields require accura...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06V10/26, G06V10/774, G06V10/766, G06V10/764, G06K9/62
CPC: G06V10/267, G06F18/24, G06F18/214
Inventor: 罗会兰, 卢飞, 余乐陶
Owner: JIANGXI UNIV OF SCI & TECH