
Trunk two-way image semantic segmentation method for scene understanding of mobile robot in complex environment

A semantic segmentation technology for mobile robots in the field of image processing. It addresses the problems that existing approaches lack an effective solution, require time-consuming and labor-intensive training, and place high demands on equipment, while achieving a simple structure, improved accuracy, and improved feature depth.

Pending Publication Date: 2022-01-18
SHANGHAI UNIV

AI Technical Summary

Problems solved by technology

[0003] Scene understanding received attention in early computer-vision research, but no sufficiently effective solution has been found.
Research on scene understanding faces many difficulties, for example: how to obtain robust features of a target object when it is affected by translation, rotation, illumination, or distortion. To achieve better segmentation, researchers usually design complex structures (such as ASPP modules) for deep-learning-based semantic segmentation models to improve segmentation accuracy, but complex structures usually slow down the running speed of the model. To improve running speed, many lightweight semantic segmentation models have been proposed, but their accuracy still lags behind that of accurate models; their structures are usually specialized and difficult to improve, or may require pre-training on the ImageNet dataset after modification, which demands high-end equipment and makes training time-consuming and laborious.

Method used


Examples


Embodiment 1

[0051] A backbone two-way image semantic segmentation method for scene understanding of mobile robots in complex environments, comprising the following steps:

[0052] S1: Input the image to be segmented into the encoder of the image semantic segmentation model (the network architecture of the encoder is shown in figure 1). The initial module of the encoder (a stem module) extracts an initial feature map from the image to be segmented; the spatial size of the initial feature map is 1/2 that of the image to be segmented. The initial feature map is then fed into the high-resolution branch and the down-sampling branch for processing.
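The "spatial size is 1/2 of the image" property of the stem module can be illustrated with a short sketch. The stride-2 convolution shape arithmetic below is an assumption about how a typical stem achieves this halving; the patent does not specify the stem's internals.

```python
import numpy as np

def conv_out_size(size, kernel=3, stride=2, pad=1):
    """Spatial output size of a convolution:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# A hypothetical 512x1024 input: one stride-2 convolution halves each
# spatial dimension, matching the stated 1/2 size of the initial feature map.
h, w = 512, 1024
print(conv_out_size(h), conv_out_size(w))  # 256 512
```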

[0053] S2: The high-resolution branch passes the initial feature map through a residual network (a ResNet18 network) for feature extraction, obtaining a first-level high-resolution feature map with the same spatial size as t...
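The residual networks used throughout the method follow the basic ResNet pattern y = ReLU(x + F(x)). A minimal numpy sketch of this identity-shortcut idea (an illustration, not the patent's actual ResNet18 implementation):

```python
import numpy as np

def residual_block(x, transform):
    """Basic residual connection: output = ReLU(x + F(x)).
    `transform` stands in for the block's stacked convolutions."""
    return np.maximum(x + transform(x), 0.0)

x = np.ones((4, 4))
y = residual_block(x, lambda t: 0.5 * t)  # F(x) = 0.5*x, so y = 1.5 everywhere
```

Because the shortcut passes `x` through unchanged, the block only has to learn the residual F(x), which is what makes deep stacks like ResNet18 trainable.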

Embodiment 2

[0075] This embodiment provides a method for training the image semantic segmentation model described in Embodiment 1 above; the steps of the method are:

[0076] A: Obtain a sample image set. The sample image set includes a plurality of sample images, each containing sample segmentation areas and the sample category information corresponding to those areas. The sample image set is randomly divided into a training set, a validation set, and a test set. The sample images come from at least one of three image datasets: the ImageNet dataset, the Cityscapes dataset, and the ADE20K dataset.

[0077] B: Input the sample images in the training set into the pre-built image semantic segmentation model for detection, and obtain the semantic segmentation result of each sample image; the semantic segmentation result includes the feature areas of the sample image obtained through semantic recognition and the corresponding categ...
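Training a segmentation model typically scores the predicted per-pixel class logits against the sample category labels with pixel-wise cross-entropy. The loss below is an assumption for illustration; the excerpt does not name the loss function used.

```python
import numpy as np

def pixel_cross_entropy(logits, labels):
    """Mean cross-entropy over all pixels.
    logits: (H, W, C) raw class scores; labels: (H, W) integer class ids."""
    shifted = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    h, w = labels.shape
    picked = log_probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -picked.mean()

logits = np.zeros((2, 2, 3))           # uniform scores over 3 classes
labels = np.array([[0, 1], [2, 0]])
loss = pixel_cross_entropy(logits, labels)  # ln(3) for uniform predictions
```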



Abstract

The invention belongs to the field of image processing and discloses a trunk two-way image semantic segmentation method, which specifically comprises the following steps: inputting an image into a semantic segmentation model for feature extraction to obtain an initial feature map; after the initial feature map is processed by a residual network, performing a first semantic fusion between it and the initial feature map subjected to down-sampling and residual-network processing, obtaining a first-level fused high-resolution feature map and a first-level fused low-resolution feature map; processing the first-level fused high-resolution feature map through a residual network and performing a second semantic fusion with the first-level fused low-resolution feature map subjected to down-sampling and residual-network processing, obtaining a third-level fused high-resolution feature map and a third-level fused low-resolution feature map; processing the third-level fused high-resolution feature map through a residual network and performing a third semantic fusion with the third-level fused low-resolution feature map subjected to down-sampling and residual-network processing, obtaining a fifth-level fused feature map; and up-sampling the fifth-level fused feature map through a decoder to obtain the image semantic segmentation result.
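Each "semantic fusion" step combines the high-resolution branch with the down-sampled branch. One common way to realize this, sketched below, is to upsample the low-resolution map to the high-resolution spatial size and merge by element-wise addition; both the nearest-neighbour upsampling and the additive merge are assumptions, since the abstract does not specify the fusion operator.

```python
import numpy as np

def upsample_nearest(x, factor=2):
    """Nearest-neighbour upsampling of a (H, W) feature map."""
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def fuse(high_res, low_res):
    """One semantic-fusion step: bring the low-resolution map up to the
    high-resolution spatial size, then merge the two branches by addition."""
    return high_res + upsample_nearest(low_res)

high = np.ones((4, 4))            # high-resolution branch features
low = np.full((2, 2), 2.0)        # down-sampled branch features (half size)
fused = fuse(high, low)           # shape (4, 4), every element 3.0
```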

Description

Technical field

[0001] The invention relates to the technical field of image processing, in particular to a trunk two-way image semantic segmentation method for scene understanding of mobile robots in complex environments.

Background technique

[0002] For mobile robots, scene understanding is the core technology for realizing true intelligence, and that ability depends on high-precision semantic segmentation algorithms for scene analysis. A service robot with scene-understanding ability has the ability to semantically segment a scene; combined with a mobile base and a high-precision robotic arm, it can further realize advanced tasks such as autonomous navigation, object delivery, and indoor security.

[0003] Scene understanding received attention in early computer-vision research, but no sufficiently effective solution has been found. Research on scene understanding faces many difficulties, such as: how to obtain the robu...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/084; G06N3/045; G06F18/213; G06F18/214; G06F18/241; G06F18/253
Inventor: 李恒宇, 程立, 刘靖逸, 岳涛, 王曰英, 谢少荣, 罗均
Owner SHANGHAI UNIV