Network model capable of jointly realizing semantic segmentation and depth-of-field estimation and training method

A technology relating to semantic segmentation and network models, applied in the fields of biological neural network models, character and pattern recognition, and computing. It addresses the problems that a single-task model cannot perform semantic segmentation and depth estimation simultaneously, that its attention-concentration effect is unsatisfactory, and that its computational cost is large.

Active Publication Date: 2020-06-30
NANJING UNIV OF POSTS & TELECOMM

Problems solved by technology

[0005] Purpose of the invention: this application aims to provide a network model, and a training method, that can jointly realize semantic segmentation and depth estimation, in order to overcome the defects of the prior art that a single-task model cannot perform semantic segmentation and depth estimation at the same time, that the attention-concentration effect is unsatisfactory, and that the amount of calculation is large.




Embodiment Construction

[0061] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0062] In one aspect, this application provides a network model that can jointly implement semantic segmentation and depth estimation, as shown in Figure 1. Before the image enters the model, initial features are extracted through a 3×3 standard convolution to obtain the input image. The network model in this embodiment includes:
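The patent text does not give the stride or padding of the 3×3 stem convolution; as a hypothetical illustration (stride 1 and padding 1 are assumptions, not stated in the source), the standard convolution output-size formula shows that such a stem can preserve spatial resolution while changing only the channel count:

```python
def conv2d_out(size, kernel=3, stride=1, padding=1):
    """Spatial output size of a 2-D convolution (standard formula)."""
    return (size + 2 * padding - kernel) // stride + 1

# With kernel 3, stride 1, padding 1 the spatial size is unchanged,
# so a hypothetical 480x640 input stays 480x640 after the stem.
h, w = conv2d_out(480), conv2d_out(640)
print(h, w)  # 480 640
```

With stride 2 instead, `conv2d_out(480, stride=2)` would halve the resolution to 240, which is the kind of step the encoding unit below uses for down-sampling.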

[0063] The feature sharing module is configured to perform feature extraction on the input image through a convolutional neural network to obtain shared features. Specifically, the feature sharing module adopts an encoder-decoder structure, comprising an encoding unit and a decoding unit, with the output of the encoding unit serving as the input of the decoding unit. The encoding unit performs feature encoding and down-sampling on the input image, and the decoding unit performs up-sampling a...
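The number of encoder/decoder stages and the sampling factors are not specified in the source; a minimal sketch, assuming four stride-2 down-sampling stages mirrored by four 2× up-sampling stages (both assumptions), traces how the decoding unit returns the shared features to the input resolution:

```python
def encode(size, stages=4):
    """Trace spatial sizes through hypothetical stride-2 encoder stages."""
    sizes = [size]
    for _ in range(stages):
        sizes.append(sizes[-1] // 2)  # each stage halves the resolution
    return sizes

def decode(size, stages=4):
    """Mirror the encoder with 2x up-sampling stages in the decoding unit."""
    sizes = [size]
    for _ in range(stages):
        sizes.append(sizes[-1] * 2)  # each stage doubles the resolution
    return sizes

enc = encode(256)        # [256, 128, 64, 32, 16]
dec = decode(enc[-1])    # [16, 32, 64, 128, 256]
assert dec[-1] == enc[0]  # decoder output matches the input resolution
```

The symmetry is what lets the shared features feed dense, per-pixel tasks such as segmentation and depth maps.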



Abstract

The invention discloses a network model capable of jointly realizing semantic segmentation and depth-of-field estimation. The network model comprises a feature sharing module and a multi-task sub-network. The multi-task sub-network comprises a plurality of task sub-networks with the same structure for processing different task targets, each comprising a feature screening module, an attention concentration module and a prediction module. The feature screening module screens task-related features out of the shared features; the attention concentration module improves the correlation between the screened features and the task target; and the prediction module outputs the processing result of each task target after convolving the attention-concentrated features. The invention further discloses a training method for the model, in which back-propagation iterative training is carried out jointly on semantic segmentation and depth-of-field estimation. The model provided by the invention has high accuracy, strong robustness and a lightweight design.
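The abstract names the three per-task modules but not their internals. A minimal numpy sketch, assuming the feature screening module is a 1×1 channel-mixing convolution and the attention concentration module is squeeze-and-excite-style channel gating (both assumptions; the patent only states that the modules select and re-weight task-relevant features), shows how two identically structured task branches consume one set of shared features:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def task_branch(shared, w_screen, w_attn):
    """One task sub-network: screening, attention, then features for prediction."""
    # Feature screening: per-pixel channel mixing, i.e. a 1x1 convolution.
    screened = np.einsum('oc,chw->ohw', w_screen, shared)
    # Attention concentration: pool each channel globally, gate with a sigmoid.
    pooled = screened.mean(axis=(1, 2))      # (C,)
    gate = sigmoid(w_attn @ pooled)          # (C,) in (0, 1)
    return screened * gate[:, None, None]    # re-weighted task features

shared = rng.standard_normal((8, 16, 16))    # shared features, C=8
w_seg = rng.standard_normal((8, 8))
w_dep = rng.standard_normal((8, 8))
w_attn = rng.standard_normal((8, 8))

seg_feat = task_branch(shared, w_seg, w_attn)    # segmentation branch
depth_feat = task_branch(shared, w_dep, w_attn)  # depth branch (own weights)
print(seg_feat.shape)  # (8, 16, 16)
```

Each branch would then pass its attended features through its prediction module's convolution to emit the per-pixel segmentation labels or depth map.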

Description

Technical field

[0001] The invention relates to computer vision image processing, and in particular to a network model and a training method that can jointly realize semantic segmentation and depth-of-field estimation.

Background technique

[0002] Semantic segmentation is a typical computer vision problem. It belongs to the class of high-level visual tasks and is an effective way to understand scenes. At the micro level, semantic segmentation makes a prediction for every pixel in the image and marks each pixel with a category label. Fine-grained reasoning about the positioning and detection of objects requires not only object category information but also additional information about the spatial position of each category, such as center points or bounding boxes, so semantic segmentation is an important step in realizing fine-grained reasoning.

[0003] In the existing application scenarios of computer vision image processing, such as ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/34, G06K9/62, G06T7/50, G06N3/04
CPC: G06T7/50, G06V10/267, G06N3/045, G06F18/253, G06F18/214
Inventors: 邵文泽, 张寒波, 李海波
Owner: NANJING UNIV OF POSTS & TELECOMM