
Road scene image processing method based on double-side dynamic cross fusion

A road scene image cross-fusion technology, applied in the field of deep learning image processing, which addresses the problems of low segmentation accuracy, reduced image feature information, and unrepresentative features, thereby improving the accuracy of semantic segmentation, reducing the loss of detail features, and reducing information loss.

Pending Publication Date: 2021-10-01
ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY
Cites: 0 | Cited by: 2

AI Technical Summary

Problems solved by technology

Almost all existing road scene understanding methods for unmanned driving are based on deep learning, building models from stacked convolutional and pooling layers. However, features obtained purely from convolution and pooling operations are both limited in variety and poorly representative. As a result, image feature information is lost during extraction, the restored detail is coarse, and segmentation accuracy is low.


Image

[Three drawings illustrating the road scene image processing method based on double-side dynamic cross fusion]


Embodiment Construction

[0048] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0049] The present invention proposes an unmanned driving road scene understanding method based on bilateral dynamic cross fusion; its overall implementation block diagram is shown in Figure 1. The method includes two processes: a training phase and a testing phase.
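To make the "bilateral dynamic cross fusion" idea concrete, the sketch below shows one plausible form of a two-branch fusion block in which an RGB branch and a thermal branch exchange gated copies of each other's features. The class name, the gating design, and the channel layout are illustrative assumptions; the patent text shown here does not disclose this exact structure.

```python
import torch
import torch.nn as nn

class CrossFusionBlock(nn.Module):
    """Illustrative two-branch cross fusion (assumed design, not the patented module):
    each branch is enriched with a gated copy of the other branch's features."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions followed by a sigmoid produce per-pixel gates.
        self.gate_from_thermal = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1), nn.Sigmoid())
        self.gate_from_rgb = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1), nn.Sigmoid())

    def forward(self, feat_rgb: torch.Tensor, feat_thermal: torch.Tensor):
        # Cross direction 1: the RGB branch absorbs gated thermal features.
        fused_rgb = feat_rgb + self.gate_from_thermal(feat_thermal) * feat_thermal
        # Cross direction 2: the thermal branch absorbs gated RGB features.
        fused_thermal = feat_thermal + self.gate_from_rgb(feat_rgb) * feat_rgb
        return fused_rgb, fused_thermal
```

The gating lets each modality decide, per pixel, how much of the other modality's features to absorb, which is one common way such cross-modal exchanges are implemented.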

[0050] The specific steps of the training phase are as follows:

[0051] Step 1_1: Select Q original road scene images, together with the thermal map (Thermal) and the real semantic segmentation image corresponding to each original road scene image, to form a training set. Record the q-th original road scene image in the training set as {I_q(i, j)}, and pair it with the corresponding real semantic segmentation image in the training set. Then, the existing one-hot encoding technique (one-hot) is used to process the real semantic segmentation ima...
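As a concrete illustration of the one-hot encoding step referenced above, the snippet below converts an integer-valued label map into a stack of binary class planes. The function name and the NumPy-based layout are assumptions for illustration; the patent only states that standard one-hot encoding is applied.

```python
import numpy as np

def one_hot_encode(label_map: np.ndarray, num_classes: int) -> np.ndarray:
    """Convert an (H, W) integer label map into a (num_classes, H, W) one-hot array."""
    h, w = label_map.shape
    encoded = np.zeros((num_classes, h, w), dtype=np.float32)
    for c in range(num_classes):
        encoded[c] = (label_map == c)  # binary plane marking pixels of class c
    return encoded

# Example: a 2x3 label map with 3 semantic classes.
labels = np.array([[0, 1, 2],
                   [2, 1, 0]])
print(one_hot_encode(labels, num_classes=3).shape)  # (3, 2, 3)
```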



Abstract

The invention discloses a road scene image processing method based on double-side dynamic cross fusion. The method comprises a training stage and a testing stage, and includes: selecting road scene images, together with the corresponding thermal maps and real semantic segmentation images, to form a training set; constructing a convolutional neural network; performing data enhancement on the training set to obtain initial input image pairs, and inputting them into the convolutional neural network to obtain the corresponding road scene prediction maps; calculating the loss function value between each road scene prediction map and the corresponding real semantic segmentation image; and repeating the above steps to obtain a trained convolutional neural network classification model. The road scene image to be semantically segmented, together with its corresponding thermal image, is then input into the trained model to obtain the corresponding predicted semantic segmentation image. The method effectively improves the semantic segmentation accuracy of road scene images, reduces the loss of detail features, and better restores object edges.
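The abstract above describes a standard supervised training loop over paired RGB and thermal inputs. The sketch below outlines such a loop in PyTorch, assuming a hypothetical `model` that maps an (RGB, thermal) pair to per-pixel class logits and a `train_loader` yielding (rgb, thermal, label) batches; the loss choice (pixel-wise cross entropy) and all names are assumptions, since the text only says a loss between the prediction map and the real segmentation image is computed.

```python
import torch
import torch.nn as nn

def train_one_epoch(model, train_loader, optimizer, device="cuda"):
    """One pass over the training set for a dual-input (RGB + thermal) segmentation model."""
    criterion = nn.CrossEntropyLoss()  # pixel-wise loss between logits and integer label maps
    model.train()
    for rgb, thermal, labels in train_loader:
        rgb, thermal, labels = rgb.to(device), thermal.to(device), labels.to(device)
        logits = model(rgb, thermal)        # shape: (N, num_classes, H, W)
        loss = criterion(logits, labels)    # labels shape: (N, H, W), integer class indices
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

At test time, the trained model is applied to a new (RGB, thermal) pair, and typically the per-pixel argmax over the class logits gives the predicted semantic segmentation map.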

Description

Technical field
[0001] The present invention relates to a deep learning image processing method, and in particular to a road scene image processing method based on bilateral dynamic cross fusion.
Background technique
[0002] The rise of the intelligent transportation industry has made semantic segmentation increasingly widely used in intelligent transportation systems. From traffic scene understanding and multi-target obstacle detection to visual navigation, all can be achieved with semantic segmentation technology. Traditional semantic segmentation mainly relies on image texture, color, and other simple surface features and external structural features to segment an image. The segmentation results obtained in this way are relatively crude, with low precision and no class annotations; that is, the image is merely divided into several blocks without knowing the category of each block, which must then be specified manually. The second is the semantic segmentati...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T 7/10; G06N 3/04; G06N 3/08
CPC: G06T 7/10; G06N 3/08; G06T 2207/20081; G06T 2207/20084; G06T 2207/10004; G06N 3/045
Inventor: 周武杰, 龚婷婷, 强芳芳, 许彩娥
Owner: ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY