Real-time image semantic segmentation method based on lightweight convolutional neural network

A convolutional neural network and semantic segmentation technology, applied in the field of real-time image semantic segmentation, which can solve problems such as low inference efficiency and hindered practical application, and achieve the effects of meeting real-time processing requirements, enhancing discriminative ability, and reducing model parameters.

Pending Publication Date: 2021-01-01
SOUTH CHINA UNIV OF TECH

AI Technical Summary

Problems solved by technology

Various FCN-based models have significantly improved semantic segmentation accuracy, but such models usually contain millions of parameters and suffer from low inference efficiency, which seriously hinders their practical application.



Examples


Embodiment

[0046] A real-time image semantic segmentation method based on a lightweight convolutional neural network, comprising the following steps:

[0047] S1. Construct a lightweight convolutional neural network, including the following steps:

[0048] S1.1. Build a multi-scale processing unit for obtaining multi-scale features of pixels;

[0049] As shown in Figure 1, the multi-scale processing unit includes 4 parallel convolutional branches: one standard 1×1 convolution and 3 dilated convolutions with dilation rates {r1, r2, r3}; the dilated convolutions are also depth-wise convolutions. The unit concatenates the outputs of the 4 parallel branches along the channel dimension and obtains its output through a standard 1×1 convolutional mapping, so the multi-scale processing unit contains 2 convolutional layers in total.
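The following is a minimal PyTorch sketch of such a multi-scale processing unit, assuming a 3×3 kernel for the depth-wise dilated branches and illustrative channel counts and dilation rates {r1, r2, r3}; none of these concrete values are specified in the excerpt above.

```python
# Sketch of the multi-scale processing unit described in [0049].
# Kernel size, channel counts and dilation rates are illustrative assumptions.
import torch
import torch.nn as nn

class MultiScaleUnit(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(2, 4, 8)):
        super().__init__()
        # Branch 1: standard 1x1 convolution.
        self.branch1x1 = nn.Conv2d(in_ch, in_ch, kernel_size=1)
        # Branches 2-4: depth-wise dilated convolutions (groups=in_ch).
        self.dilated = nn.ModuleList([
            nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=r,
                      dilation=r, groups=in_ch)
            for r in rates
        ])
        # Second convolutional layer: 1x1 mapping of the concatenated branches.
        self.project = nn.Conv2d(4 * in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        feats = [self.branch1x1(x)] + [conv(x) for conv in self.dilated]
        # Concatenate the 4 parallel branch outputs along the channel dimension.
        return self.project(torch.cat(feats, dim=1))

if __name__ == "__main__":
    unit = MultiScaleUnit(in_ch=32, out_ch=64)
    y = unit(torch.randn(1, 32, 64, 128))
    print(y.shape)  # torch.Size([1, 64, 64, 128])
```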

[0050] S1.2. Use the built multi-scale processing unit to replace the first standard 3×3 convolution of the b...



Abstract

The invention discloses a real-time image semantic segmentation method based on a lightweight convolutional neural network. The method comprises the following steps: constructing a lightweight convolutional neural network; training the constructed lightweight convolutional neural network; and performing semantic segmentation on images in a given scene by using the trained lightweight neural network. The constructed convolutional neural network incorporates a multi-path processing mechanism, can effectively encode multi-spatial-scale features of pixels, and alleviates the problem that multi-scale targets are difficult to distinguish. Meanwhile, combined with depth-wise convolution, the model parameters are greatly reduced: the constructed lightweight convolutional neural network has only 90 million parameters, far fewer than those of existing methods, so the goal of a lightweight model is achieved and the real-time processing requirement is met. Besides, the lightweight convolutional neural network is based on a fully convolutional network, realizing end-to-end training and inference, which greatly simplifies the training and deployment process of the model.
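As a rough, hedged illustration of the parameter savings the abstract attributes to depth-wise convolution, the sketch below compares the weight count of a standard 3×3 convolution with that of a depth-wise 3×3 plus point-wise 1×1 pair for an assumed 64-in/64-out layer; the channel counts are illustrative and not taken from the patent.

```python
# Illustrative parameter-count comparison (channel counts assumed, biases ignored):
# standard 3x3 convolution vs. depth-wise 3x3 followed by point-wise 1x1 convolution.
c_in, c_out, k = 64, 64, 3

standard = k * k * c_in * c_out                     # 36,864 weights
depthwise_separable = k * k * c_in + c_in * c_out   # 576 + 4,096 = 4,672 weights

print(f"standard 3x3 conv:     {standard:,} weights")
print(f"depth-wise + 1x1 conv: {depthwise_separable:,} weights")
print(f"reduction factor:      {standard / depthwise_separable:.1f}x")
```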

Description

technical field

[0001] The invention belongs to the field of computer vision, and in particular relates to a real-time image semantic segmentation method based on a lightweight convolutional neural network.

Background technique

[0002] The purpose of image semantic segmentation is to assign a semantic category label to each pixel in an image; it is a pixel-level dense classification task. Overall, semantic segmentation is one of the fundamental tasks paving the way for comprehensive scene understanding. More and more applications acquire knowledge from image data, including autonomous driving, human-computer interaction, indoor navigation, image editing, augmented reality, virtual reality, etc.

[0003] Image semantic segmentation methods can be divided into two categories. The first is traditional methods, such as threshold-based segmentation, edge-based segmentation, region-based segmentation, graph-theory-based segmentation, and energy-functional-based segmenta...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/10; G06N3/04; G06N3/08
CPC: G06T7/10; G06N3/08; G06N3/045
Inventor: 刘发贵, 唐泉
Owner: SOUTH CHINA UNIV OF TECH