
Traffic image semantic segmentation method based on multi-feature map

A multi-feature semantic segmentation technology in the field of computer vision and pattern recognition that addresses the problem of low semantic segmentation accuracy and achieves the effect of improved accuracy.

Status: Inactive
Publication Date: 2018-11-02
DALIAN UNIV OF TECH

AI Technical Summary

Problems solved by technology

However, current semantic segmentation methods for traffic images based on RGB images fail to make full use of the feature information in the image, resulting in low semantic segmentation accuracy.




Embodiment Construction

[0046] The specific embodiments of the present invention will be described in detail below in conjunction with the technical solutions and accompanying drawings.

[0047] As shown in Figure 1, a traffic image semantic segmentation method based on multi-feature maps includes the following steps:

[0048] A. Obtain multi-feature map training samples

[0049] A1. Obtain the disparity map, as shown in Figure 2;

[0050] A2. Obtain the height map, as shown in Figure 3;

[0051] A3. Obtain the angle map, as shown in Figure 4;
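
The embodiment refers to Figures 2-4 for the disparity, height, and angle maps but does not spell out their computation in the excerpt above. The following is a minimal sketch, assuming a calibrated stereo pair, of how such feature maps could be derived; the use of OpenCV's StereoSGBM, the parameters focal_px, baseline_m, and cam_height_m, and the gravity-aligned angle estimate are illustrative assumptions, not the patent's procedure.

```python
# Minimal sketch (assumption): derive the three feature maps of step A from a
# calibrated stereo pair. The patent's exact procedure (Figures 2-4) is not
# reproduced here; StereoSGBM, the camera parameters, and the gravity-aligned
# normal-angle estimate are illustrative choices.
import cv2
import numpy as np

def build_feature_maps(left_bgr, right_bgr, focal_px, baseline_m, cam_height_m):
    """Return (disparity, height, angle) maps as float32 arrays."""
    left_gray = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    # A1: disparity map via semi-global block matching
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = 0.1  # crude handling of invalid matches

    # A2: height map -- height of each pixel above the ground plane,
    # assuming a camera mounted cam_height_m above a roughly flat road
    h, w = disparity.shape
    depth = (focal_px * baseline_m / disparity).astype(np.float32)  # metric depth Z
    v = np.arange(h, dtype=np.float32).reshape(-1, 1)               # image rows
    y_cam = (v - h / 2.0) * depth / focal_px                        # Y in camera frame
    height = cam_height_m - y_cam                                   # above the road

    # A3: angle map -- approximate angle between the local surface normal
    # and the vertical (gravity) direction, estimated from depth gradients
    dzdx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=5)
    dzdy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=5)
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    angle = np.degrees(np.arccos(np.clip(normals[..., 1], -1.0, 1.0)))

    return disparity, height, angle
```

In practice the three maps would be computed once for every stereo pair in the training set and stored alongside the corresponding color image.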

[0052] B. Construct a network model and train the constructed network model

[0053] B1. Build the network model, as shown in Figure 5;

[0054] The input of the network model consists of a feature map and a color map, where the feature map comprises a disparity map, a height map, and an angle map; from left to right, the inputs are the disparity map, height map, angle map, and color map. The network model consists of an encoder network and a decoder network. Th...
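
The excerpt above names the inputs (disparity, height, angle, and color maps) and the encoder-decoder structure, but the detailed layer configuration of Figure 5 is truncated. Below is a minimal PyTorch sketch of a six-channel encoder-decoder segmentation network consistent with that description; the class name MultiFeatureSegNet, the layer sizes, and num_classes are illustrative assumptions and do not reproduce the architecture of Figure 5.

```python
# Minimal sketch (assumption): a six-channel encoder-decoder segmentation
# network. The channel stacking (disparity, height, angle + RGB) follows the
# text above; layer sizes are illustrative, not the patent's Figure 5.
import torch
import torch.nn as nn

class MultiFeatureSegNet(nn.Module):
    def __init__(self, num_classes=11):
        super().__init__()
        # Encoder: progressively downsample the six-channel input
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Decoder: upsample back to the input resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 64, 2, stride=2), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, num_classes, 1),  # per-pixel class scores
        )

    def forward(self, feature_maps, color):
        # feature_maps: (N, 3, H, W) disparity/height/angle; color: (N, 3, H, W)
        x = torch.cat([feature_maps, color], dim=1)  # six-channel input
        return self.decoder(self.encoder(x))
```

Training such a model would typically pair the per-pixel class scores with a cross-entropy loss over the labeled traffic categories.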



Abstract

The invention discloses a traffic image semantic segmentation method based on a multi-feature map. The method comprises the following steps: obtaining multi-feature map training samples, namely a disparity map, a height map, and an angle map; constructing a network model and training it; inputting a six-channel test image into the trained network model; outputting, via a multi-class softmax classifier layer, the probability that each pixel in the six-channel image belongs to each object category; predicting the object category to which each pixel in the six-channel image belongs; and finally outputting an image semantic segmentation map. By fusing a color image with the depth map, the height map, and the angle map, the method obtains more feature information from the image, which is conducive to understanding the road traffic scene and improving semantic segmentation accuracy. By means of the learned effective features, the method predicts the object category to which each pixel in the image belongs and outputs the image semantic segmentation map.
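
The abstract describes inference as feeding a six-channel test image to the trained model, applying a softmax layer to obtain per-pixel class probabilities, and taking the most probable category for each pixel. A minimal sketch of that flow, reusing the illustrative MultiFeatureSegNet above (the checkpoint path and image size are hypothetical):

```python
# Minimal sketch (assumption): inference as described in the abstract, using
# the illustrative MultiFeatureSegNet defined earlier. Softmax yields per-pixel
# class probabilities; argmax gives the segmentation map.
import torch

model = MultiFeatureSegNet(num_classes=11)
model.load_state_dict(torch.load("trained_model.pth"))  # hypothetical checkpoint
model.eval()

with torch.no_grad():
    feature_maps = torch.rand(1, 3, 256, 512)   # disparity, height, angle channels
    color = torch.rand(1, 3, 256, 512)          # RGB channels
    logits = model(feature_maps, color)         # (1, num_classes, H, W)
    probs = torch.softmax(logits, dim=1)        # probability of each category per pixel
    segmentation_map = probs.argmax(dim=1)      # (1, H, W) class index per pixel
```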

Description

Technical Field

[0001] The invention relates to the fields of computer vision and pattern recognition, in particular to a method for semantic segmentation of multi-feature map traffic images based on stereo vision and a deep convolutional neural network model.

Background Technique

[0002] The understanding and perception of traffic scenes is of great significance for obstacle detection, traversability estimation, and path planning of smart cars. However, road traffic scenes are complex, variable, and uncertain, which makes their perception and understanding difficult and keeps accuracy low. Before the emergence of deep convolutional neural networks, perception and understanding of traffic scenes were mostly realized by manually designing feature extractors to classify the scene, which required writing complex programs and was mainly effective for specific categories rather than being universal.

[0003] In rece...


Application Information

Patent Type & Authority: Applications (China)
IPC (8): G06T7/11, G06T7/80, G06N3/04, G06N3/08
CPC: G06N3/084, G06T7/11, G06T7/85, G06N3/045
Inventor: 连静, 孔令超, 郑伟娜, 李琳辉, 周雅夫
Owner: DALIAN UNIV OF TECH