
Scene semantic segmentation method based on full convolution and long short-term memory units

A long short-term memory and semantic segmentation technology, applied in the field of image semantic segmentation and deep learning, which solves the problems of object over-segmentation and low accuracy in scene image segmentation and achieves the effect of improving segmentation accuracy.

Status: Inactive
Publication date: 2017-12-15
Applicant: UNIV OF ELECTRONIC SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

[0006] The purpose of the present invention is to provide a method for scene semantic segmentation based on full convolution and long short-term memory units.

Method used



Embodiment Construction

[0027] In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with specific embodiments and the accompanying drawings.

[0028] As shown in Figure 1, the specific steps of the scene semantic segmentation method based on full convolution and long short-term memory units in this embodiment are as follows:

[0029] S1: Build a deep neural network based on full convolution, multi-scale fusion, and long short-term memory units.
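
As an illustration of this step, below is a minimal PyTorch sketch of such a network, assuming (as the abstract suggests) three parts: the fully convolutional front end built in [0030], a pyramid-pooling multi-scale fusion module, and a long short-term memory module. The row-by-row LSTM scan, channel counts, and pyramid bin sizes are illustrative assumptions, not details given in this excerpt.

```python
# Hedged sketch of the S1 architecture: FCN front end + pyramid pooling + LSTM.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """Pool at several grid sizes, project to fewer channels, upsample, concatenate."""
    def __init__(self, in_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, in_ch // len(bins), kernel_size=1),
                          nn.ReLU(inplace=True))
            for b in bins)

    def forward(self, x):
        size = x.shape[-2:]
        pooled = [F.interpolate(branch(x), size=size, mode="bilinear",
                                align_corners=False) for branch in self.branches]
        return torch.cat([x] + pooled, dim=1)    # multi-scale fusion by concatenation

class RowLSTM(nn.Module):
    """Bidirectional LSTM over each row of the feature map for long-range context (assumed scan order)."""
    def __init__(self, in_ch, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(in_ch, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):
        n, c, h, w = x.shape
        rows = x.permute(0, 2, 3, 1).reshape(n * h, w, c)   # one sequence per row
        out, _ = self.lstm(rows)                             # (n*h, w, 2*hidden)
        return out.reshape(n, h, w, -1).permute(0, 3, 1, 2)

class SceneSegNet(nn.Module):
    def __init__(self, backbone, backbone_ch, num_classes):
        super().__init__()
        self.backbone = backbone                 # fully convolutional VGG-16 front end
        self.ppm = PyramidPooling(backbone_ch)
        self.lstm = RowLSTM(backbone_ch * 2)     # concatenation doubles the channels
        self.classifier = nn.Conv2d(512, num_classes, kernel_size=1)  # 2 * LSTM hidden size

    def forward(self, x):
        feat = self.backbone(x)
        feat = self.ppm(feat)
        feat = self.lstm(feat)
        return self.classifier(feat)             # per-pixel class scores at reduced resolution
```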

[0030] As shown in Figure 2, the basic structure of the front-end convolutional neural network module is modified from VGG-16. The main components of VGG-16 are 5 groups of convolutional layers, 3 fully connected layers, and 1 Softmax layer. In this embodiment, the front-end network uses the convolutional layers of the first 5 groups of VGG-16 and removes the pooling layers of the 4th and 5th groups as well as the last 3 fully connected layers. ...
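
The modification described above can be sketched as follows, assuming torchvision's VGG-16 layout (torchvision ≥ 0.13); the index positions of the removed pooling layers come from that layout, not from the patent text.

```python
# Hedged sketch of the [0030] front end: keep the 5 convolutional groups of VGG-16,
# drop the 4th and 5th pooling layers, and drop the fully connected and Softmax
# layers entirely so the network stays fully convolutional.
import torch
import torch.nn as nn
import torchvision

def build_vgg16_frontend(pretrained=False):
    weights = torchvision.models.VGG16_Weights.DEFAULT if pretrained else None
    features = torchvision.models.vgg16(weights=weights).features
    # In torchvision's layout, indices 23 and 30 are the max-pooling layers of
    # groups 4 and 5; removing them leaves the output at 1/8 of the input
    # resolution instead of 1/32.
    kept = [layer for i, layer in enumerate(features) if i not in (23, 30)]
    return nn.Sequential(*kept)       # output: 512-channel feature map

# Example: a 512x512 RGB image yields a 512-channel, 64x64 feature map.
# frontend = build_vgg16_frontend()
# feat = frontend(torch.randn(1, 3, 512, 512))   # -> torch.Size([1, 512, 64, 64])
```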



Abstract

The invention discloses a scene semantic segmentation method based on full convolution and long short-term memory units, relating to the technical field of image processing. The method includes the following steps: S1, constructing a deep neural network based on full convolution, a pyramid pooling module, and a long short-term memory unit module; S2, comparing the predicted image with the labeled image, training with the Softmax loss as the objective function and stochastic gradient descent as the optimization method, and updating the weights of the deep neural network built in S1; S3, repeating S2 until the loss decreases below a set limit, completing the training; and S4, inputting a new scene image into the trained deep neural network and upsampling the output to the original image resolution by bilinear interpolation to obtain the semantic segmentation result of the scene. The method solves the problems of low accuracy in current scene image segmentation and of over-segmentation and under-segmentation of objects in the image.
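
The training and inference procedure of steps S2 to S4 could look roughly like the following PyTorch sketch; the learning rate, momentum, epoch count, and loss limit are placeholders, not values taken from the patent.

```python
# Hedged sketch of S2-S4: Softmax (cross-entropy) loss, SGD updates, training until
# the loss falls below a limit, then bilinear upsampling of the prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

def train(model, loader, num_epochs=50, loss_limit=0.1, lr=1e-3):
    # S2: Softmax loss against the labeled image, optimized with stochastic gradient descent.
    criterion = nn.CrossEntropyLoss()            # softmax + negative log-likelihood
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

    # S3: repeat S2 until the average loss drops below the chosen limit.
    for epoch in range(num_epochs):
        epoch_loss = 0.0
        for images, labels in loader:            # labels: (N, H, W) class indices
            logits = model(images)               # (N, num_classes, h, w), reduced resolution
            logits = F.interpolate(logits, size=labels.shape[-2:],
                                   mode="bilinear", align_corners=False)
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss / len(loader) < loss_limit:
            break
    return model

@torch.no_grad()
def segment(model, image):
    # S4: run a new scene image through the trained network and upsample the
    # prediction to the original resolution by bilinear interpolation.
    logits = model(image.unsqueeze(0))
    logits = F.interpolate(logits, size=image.shape[-2:],
                           mode="bilinear", align_corners=False)
    return logits.argmax(dim=1).squeeze(0)       # per-pixel class labels
```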

Description

Technical field

[0001] The present invention relates to the field of image semantic segmentation and deep learning, and in particular to a scene semantic segmentation method based on full convolution and long short-term memory units.

Background technique

[0002] Scene semantic segmentation is the application of image semantic segmentation to scene images. It plays a vital role in subsequent computer vision tasks, such as distinguishing road from non-road regions in video analysis for autonomous driving. Scene semantic segmentation is generally modeled as a pixel-level multi-classification problem, whose goal is to assign each pixel of the image to one of multiple predefined categories.

[0003] Traditional scene semantic segmentation methods generally extract hand-crafted features, such as texture features, from small windows in the neighborhood of image pixels. At the same time, considering the spatial dependence between im...


Application Information

IPC (8): G06K 9/62; G06N 3/04
CPC: G06N 3/045; G06F 18/2431
Inventors: 程建, 张建, 朱晓雅, 张泽厚
Owner: UNIV OF ELECTRONIC SCI & TECH OF CHINA