
Road scene semantic segmentation based on a multi-scale atrous convolutional neural network

A convolutional neural network and semantic segmentation technology, applied in the field of road scene semantic segmentation based on multi-scale atrous convolutional neural networks, which addresses problems of existing methods such as low segmentation accuracy, reduced image feature information, and coarse restored edge information.

Active Publication Date: 2019-03-22
ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY


Problems solved by technology

[0004] Most existing road scene semantic segmentation methods are deep learning methods that stack convolutional layers and pooling layers. Repeated pooling reduces the feature information retained from the image, so the restored edge information is coarse and the segmentation accuracy is low.
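Atrous (dilated) convolution, which the patent's method relies on, avoids this resolution loss: it enlarges the receptive field by spacing the kernel taps apart instead of downsampling. A minimal 1-D NumPy sketch (an illustration of the general technique, not the patent's actual implementation; `atrous_conv1d` is a hypothetical helper name):

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    """1-D atrous (dilated) convolution with 'same' zero padding.
    Effective kernel span = (len(kernel) - 1) * rate + 1."""
    k = len(kernel)
    span = (k - 1) * rate + 1
    pad = span // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        for j in range(k):
            out[i] += kernel[j] * xp[i + j * rate]
    return out

signal = np.arange(8, dtype=float)
kernel = np.array([1.0, 1.0, 1.0])

# rate=1 behaves like an ordinary 3-tap convolution
y1 = atrous_conv1d(signal, kernel, rate=1)
# rate=2 covers a 5-sample span with the same 3 weights,
# yet the output keeps the full input resolution (no pooling)
y2 = atrous_conv1d(signal, kernel, rate=2)
print(len(signal), len(y1), len(y2))  # 8 8 8: resolution preserved
```

Because the output length matches the input at every rate, multiple rates can be combined (the "multi-scale" idea) without the edge degradation that pooling causes.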



Embodiment Construction

[0052] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0053] The road scene semantic segmentation method based on a multi-scale atrous convolutional neural network proposed by the present invention has an overall implementation block diagram as shown in Figure 1; it comprises two processes, a training phase and a testing phase;

[0054] The specific steps of the described training phase process are:

[0055] Step 1_1: Select Q original road scene images and the real semantic segmentation image corresponding to each original road scene image to form a training set, and record the q-th original road scene image in the training set as {I_q(i,j)}, with the real semantic segmentation image corresponding to {I_q(i,j)} denoted accordingly. Then, the existing one-hot encoding technique (one-hot) is used to process the real semantic segmentation images corresponding to each or...
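The one-hot step in Step 1_1 can be sketched as follows: each integer-labeled ground-truth map is expanded into one binary map per class. This is a minimal NumPy illustration under assumed names (`one_hot_labels` is hypothetical, and the 2×2 toy label map stands in for a real ground-truth segmentation image):

```python
import numpy as np

def one_hot_labels(label_map, num_classes):
    """Convert an HxW integer label map into num_classes binary HxW maps,
    mirroring the per-class one-hot encoding described in Step 1_1."""
    h, w = label_map.shape
    encoded = np.zeros((num_classes, h, w), dtype=np.uint8)
    for c in range(num_classes):
        encoded[c][label_map == c] = 1  # 1 where the pixel belongs to class c
    return encoded

labels = np.array([[0, 1],
                   [2, 1]])
enc = one_hot_labels(labels, num_classes=3)
print(enc.shape)  # (3, 2, 2): one binary map per class
```

Every pixel is 1 in exactly one of the class maps, which is what lets the encoded set be compared directly against the network's per-class prediction maps during training.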



Abstract

The invention discloses a road scene semantic segmentation method based on a multi-scale atrous convolutional neural network. In the training stage, a multi-scale atrous convolutional neural network is constructed; its hidden layer comprises nine neural network blocks, five cascade layers, and six up-sampling blocks. The original road scene images are input into the multi-scale atrous convolutional neural network for training, and 12 corresponding semantic segmentation prediction maps are obtained. By calculating the loss function value between the set of 12 semantic segmentation prediction maps corresponding to each original road scene image and the set of 12 one-hot encoded images obtained from the corresponding real semantic segmentation image, the optimal weight vectors and bias terms of the multi-scale atrous convolutional neural network classification training model are obtained. In the testing phase, the road scene images to be segmented are input into the multi-scale atrous convolutional neural network classification training model, and the predicted semantic segmentation images are obtained. The invention has the advantage of improving the efficiency and accuracy of semantic segmentation of road scene images.
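The abstract's loss computation over 12 prediction maps and 12 one-hot target maps can be sketched as below. The patent extract does not specify the loss type or per-map weighting, so the pixel-wise cross-entropy and the equal-weight sum here are assumptions for illustration, and the function names (`pixel_cross_entropy`, `total_loss`) are hypothetical:

```python
import numpy as np

def pixel_cross_entropy(pred, target, eps=1e-12):
    """Mean per-pixel cross-entropy between a softmax prediction map and
    its one-hot target, both shaped (num_classes, H, W)."""
    return float(-(target * np.log(pred + eps)).sum(axis=0).mean())

def total_loss(pred_maps, target_maps):
    # Equal-weight sum over all output maps (12 in the patent's network);
    # the actual weighting is not stated in this extract, so uniform
    # weights are an assumption.
    return sum(pixel_cross_entropy(p, t) for p, t in zip(pred_maps, target_maps))

# toy target: 3 classes on a 2x2 image, every pixel in class 0
target = np.zeros((3, 2, 2))
target[0, :, :] = 1.0
perfect = target.copy()                    # network predicts the target exactly
uniform = np.full((3, 2, 2), 1.0 / 3.0)    # maximally uncertain prediction

print(total_loss([perfect] * 12, [target] * 12) < 1e-9)  # True: near-zero loss
print(total_loss([uniform] * 12, [target] * 12))         # 12 * ln(3): high loss
```

Minimizing this total over the training set is what yields the optimal weight vectors and bias terms described above.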

Description

Technical Field

[0001] The invention relates to a deep learning semantic segmentation method, in particular to a road scene semantic segmentation method based on a multi-scale atrous convolutional neural network.

Background Technique

[0002] With the rapid development of the intelligent transportation industry, road scene understanding has been more and more widely applied in intelligent transportation for assisted driving and unmanned driving systems. One of the most challenging tasks in autonomous driving is road scene understanding, including lane detection and semantic segmentation as computer vision tasks. Lane detection helps guide the vehicle, and semantic segmentation provides the more detailed locations of objects in the surrounding environment. Semantic segmentation is an important direction of computer vision; its essence is to classify images at the pixel level. Its application in road scene understanding is to segment objects including roads, cars, pedestrians, ...


Application Information

Patent Type & Authority: Application (China)
IPC (IPC8): G06K9/00; G06K9/34; G06N3/04; G06N3/08
CPC: G06N3/08; G06V20/35; G06V10/267; G06N3/045
Inventors: 周武杰, 顾鹏笠, 潘婷, 吕思嘉, 钱亚冠, 向坚
Owner: ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY