Environment semantic mapping method based on deep convolutional neural network

A deep convolutional neural network technology for environment semantic mapping that addresses the time and resource cost of existing mapping methods, achieving high mapping efficiency and improved accuracy.

Active Publication Date: 2019-04-16
NORTHEASTERN UNIV
Cites 4 · Cited by 68
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

Technical Summary

Problems solved by technology

There are also proposals in the industry to build a 3D environment map with dense ElasticFusion and to perform image segmentation with a deconvolutional neural network. However, dense ElasticFusion mapping is relatively time-consuming and resource-intensive.


Image

  • Environment semantic mapping method based on deep convolutional neural network

Examples

Experimental program
Comparison scheme
Effect test

Embodiment 2

[0103] In this embodiment, the application device is a server with an Nvidia GTX Titan Xp GPU, and the test system is Ubuntu 14.04. Each dataset is initialized with pre-trained network weights. The remaining parameters are shown in Table 1, where ε is the optimization parameter of the optimizer.

[0104] Table 1 Experimental parameters of each data set

[0105]

[0106] Step 1: Since the system provides a depth image, it can be aligned directly with the color image for segmentation, pose estimation, and 3D reconstruction. To evaluate the proposed semantic segmentation algorithm, the network was trained on the outdoor-scene CityScapes dataset (19 categories), the indoor-scene NYUv2 dataset (41 categories), and the PASCAL VOC 2012 dataset (21 categories). Among these, the NYUv2 dataset provides information that can serve as visual odometry. The magnitude of the labeled images in...
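Once the depth image is aligned with the color image, each segmented pixel can be lifted directly into 3D. The following is only an illustrative sketch of that step, not the patent's actual implementation: standard pinhole back-projection that turns an aligned depth map plus a per-pixel label map into a labeled point cloud. The function name and the intrinsics `fx, fy, cx, cy` are hypothetical placeholders.

```python
import numpy as np

def backproject_semantic(depth, labels, fx, fy, cx, cy):
    """Back-project an aligned depth image into a labeled 3D point cloud.

    depth  : (H, W) float array in metres; 0 marks invalid pixels
    labels : (H, W) int array of per-pixel semantic class ids
    Returns an (N, 4) array of [X, Y, Z, label] rows for valid pixels.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = depth > 0                               # skip missing depth
    z = depth[valid]
    x = (u[valid] - cx) * z / fx                    # pinhole model: X = (u - cx) * Z / fx
    y = (v[valid] - cy) * z / fy
    return np.column_stack([x, y, z, labels[valid].astype(float)])
```

Transforming these per-keyframe clouds with the poses from ORB-SLAM and merging them across adjacent keyframes would then yield the dense semantic map described in the abstract.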


PUM

No PUM

Abstract

The invention provides an environment semantic mapping method based on a deep convolutional neural network. By combining the scene-recognition strengths of deep learning with the autonomous-localization strengths of SLAM, the method builds an environment map that contains object category information. Specifically, ORB-SLAM is used to perform key-frame screening and inter-frame pose estimation on the input image sequence; two-dimensional semantic segmentation is carried out with an improved method based on Deeplab image segmentation; an upsampling convolutional layer is introduced after the last layer of the convolutional network; depth information is used as a threshold signal to control the selection of different convolution kernels; the segmented image is aligned with the depth map; and a three-dimensional dense semantic map is constructed using the spatial correspondence between adjacent key frames. According to this scheme, image segmentation precision is improved and higher mapping efficiency is achieved.
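The abstract's "depth information as a threshold signal to control selection of different convolution kernels" can be pictured as a per-pixel gate between two kernels. The patent text here does not publish the exact gating rule, so the following is only a hedged sketch under assumed details: a single hard threshold `thresh`, two hypothetical 3×3 kernels `k_near`/`k_far`, and plain cross-correlation.

```python
import numpy as np

def depth_gated_conv(img, depth, k_near, k_far, thresh):
    """Per-pixel gated 3x3 cross-correlation: apply k_near where the
    depth is below thresh, k_far elsewhere. Illustrative sketch only;
    the patent's actual kernel-selection rule is not specified here."""
    h, w = img.shape
    padded = np.pad(img, 1)                  # zero-pad so output keeps img's shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            k = k_near if depth[i, j] < thresh else k_far
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * k)
    return out
```

In a real network the gate would sit inside a convolutional layer and both kernels would be learned; the hard threshold here just makes the selection mechanism visible.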

Description

technical field [0001] The invention relates to the fields of digital image processing and computer vision, in particular to an environment semantic mapping method based on a deep convolutional neural network. Background technique [0002] Accurately obtaining environmental information is a key step for mobile robots to perform tasks autonomously. While a robot moves, SLAM technology lets it build a description of its surroundings, i.e., an environment map. However, traditional SLAM mapping considers only geometric data and cannot capture the categories of objects in the map; the information it provides is insufficient and its features are weakly distinctive. Semantic information covers object category, target detection, and semantic segmentation, which allow the scene content to be understood and help the robot perform tasks in a goal-oriented manner. Therefore, combining the two is an inevitable requirement. ...

Claims


Application Information

IPC(8): G06T17/05; G06T19/20; G06T7/11; G06T7/38
CPC: G06T7/11; G06T7/38; G06T17/05; G06T19/20; G06T2219/2012; G06T2207/20084; G06T2207/20016; G06T2207/10024; G06T2207/10028
Inventor: 张云洲, 胡美玉, 秦操, 张维智, 张括嘉, 张珊珊
Owner NORTHEASTERN UNIV