
Semantic segmentation method in automatic driving scene based on BiSeNet

A technology for semantic segmentation in automatic driving scenes, which addresses problems such as the time-consuming labeling of training data, and achieves a small model size, high accuracy, and good convergence.

Active Publication Date: 2020-12-11
FUZHOU UNIV

AI Technical Summary

Problems solved by technology

The disadvantage of deep learning is that it requires a large amount of labeled data, which makes training time-consuming; nevertheless, these flaws do not outweigh its merits.




Embodiment Construction

[0046] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0047] Referring to Figure 1, the present invention provides a BiSeNet-based semantic segmentation method for automatic driving scenes, comprising the following steps:

[0048] Step S1: collecting urban street image data and preprocessing;

[0049] Step S2: labeling the preprocessed image data to obtain the labeled image data;

[0050] Step S3: performing data enhancement on the labeled image data, and using the enhanced image data as a training set;

[0051] Step S4: constructing the BiSeNet neural network model, and training the model on the training set;

[0052] Step S5: preprocessing the video information collected by the camera, and performing semantic segmentation on the urban streets captured by the camera using the trained BiSeNet neural network model.
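The data-preparation steps S1-S3 above can be sketched in code. This is a minimal illustration, not the patent's actual implementation: the normalization in `preprocess` and the random horizontal flip in `augment_pair` are assumed, commonly used choices, since the excerpt does not specify the exact preprocessing or enhancement operations. The key point it demonstrates is that any geometric augmentation must be applied identically to an image and its per-pixel label map so the two stay aligned.

```python
import numpy as np

def preprocess(image):
    # Step S1 (sketch): scale 8-bit pixel values to [0, 1] floats.
    return image.astype(np.float32) / 255.0

def augment_pair(image, label, rng):
    # Step S3 (sketch): random horizontal flip, applied identically to
    # the image and its per-pixel label map so they remain aligned.
    if rng.random() < 0.5:
        return image[:, ::-1].copy(), label[:, ::-1].copy()
    return image, label

def build_training_set(images, labels, seed=0):
    # Steps S1-S3 combined: preprocess, then augment each (image, label)
    # pair; the result serves as the training set for step S4.
    rng = np.random.default_rng(seed)
    return [augment_pair(preprocess(x), y, rng) for x, y in zip(images, labels)]
```

In practice the training set would also include crops, scale jitter, and color perturbations, but the flip suffices to show the image/label coupling.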

[0053] Further, step S1 specifically comprises:

[0054] Step S11: analyze the categorie...



Abstract

The invention relates to a semantic segmentation method in an automatic driving scene based on BiSeNet, the method comprising the following steps: S1, collecting urban street image data and preprocessing it; S2, labeling the preprocessed image data to obtain labeled image data; S3, performing data enhancement on the labeled image data and taking the enhanced image data as a training set; S4, constructing a BiSeNet neural network model and training the model on the training set; and S5, preprocessing the video information acquired by the camera and performing semantic segmentation on the urban streets captured by the camera using the trained BiSeNet neural network model. The safety of automatic driving and the accuracy and rapidity of road scene segmentation can be effectively improved.

Description

Technical Field

[0001] The invention relates to the fields of pattern recognition and computer vision, in particular to a semantic segmentation method in a BiSeNet-based automatic driving scene.

Background Technique

[0002] Semantic image segmentation is an essential part of modern autonomous driving systems, as an accurate understanding of the scene around the car is critical for navigation and action planning. Semantic segmentation can help autonomous vehicles identify drivable areas in an image. Since the emergence of Fully Convolutional Networks (FCN), convolutional neural networks have gradually become the mainstream method for semantic segmentation tasks, many of which directly borrow convolutional neural network methods from other fields. In the past ten years, many scholars have made great efforts in the creation of semantic segmentation datasets and algorithm improvement. Thanks to the development of deep learning...
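The background paragraph's claim that segmentation identifies drivable areas can be made concrete: a segmentation network emits per-pixel class scores, the predicted class map is the argmax over the class axis, and the drivable-area mask is the set of pixels assigned to the road class. The class indices and logits below are illustrative, not taken from the patent.

```python
import numpy as np

# Per-pixel class scores from a segmentation network: shape (H, W, C).
# Class indices here are illustrative: 0 = road, 1 = sidewalk, 2 = car.
ROAD = 0
logits = np.array([
    [[2.0, 0.1, 0.3], [0.2, 1.5, 0.1]],
    [[1.8, 0.4, 0.2], [0.1, 0.2, 2.2]],
])  # a tiny 2x2 "image" with 3 classes

pred = logits.argmax(axis=-1)   # per-pixel class map -> [[0, 1], [0, 2]]
drivable = pred == ROAD         # boolean drivable mask -> [[True, False], [True, False]]
```

The same two lines apply unchanged to a full-resolution (H, W, C) score tensor from any segmentation model, including BiSeNet.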


Application Information

Patent Type & Authority Applications(China)
IPC (8): G06K9/00, G06K9/34, G06K9/62, G06N3/04
CPC: G06V20/40, G06V20/56, G06V10/267, G06N3/045, G06F18/214, Y02T10/40
Inventors: 柯逍 (Ke Xiao), 蒋培龙 (Jiang Peilong), 黄艳艳 (Huang Yanyan)
Owner FUZHOU UNIV