
Ground object coverage rate calculation method based on full convolutional network and conditional random field

A technology based on fully convolutional networks and conditional random fields, applied in the field of ground object coverage calculation. It addresses the problem that ground object coverage information cannot be counted accurately and quickly from remote sensing images, and achieves fast calculation speed, wide adaptability, and high accuracy.

Publication Date: 2019-08-09 (Inactive)
成都图必优科技有限公司

AI Technical Summary

Problems solved by technology

[0003] The technical problem to be solved by the present invention is that it is currently impossible to accurately and quickly count the coverage information of ground objects through remote sensing images. The problem of quickly counting the coverage rate information of ground objects



Examples


Embodiment 1

[0030] As shown in Figure 1, the ground object coverage calculation method of the present invention, based on a fully convolutional network and a conditional random field, comprises the following steps:

[0031] S1. Construct a fully convolutional neural network;

[0032] S2. Prepare training data: annotate the collected remote sensing images pixel by pixel according to the categories to be segmented, perform data augmentation on the remote sensing images, and construct a semantic segmentation data set;

[0033] S3. Train the fully convolutional neural network: input the semantic segmentation data set obtained in step S2 into the fully convolutional neural network constructed in step S1, train iteratively, and update the network parameters until the training result meets the preset convergence condition;

[0034] S4. Remote sensing image segmentation: use the fully convolutional neural network trained in step S3 to perform semantic segmentation on the image to be se...
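The text truncates here, but the abstract indicates that the remaining steps optimize the segmentation result with the conditional random field and then compute the coverage information. As a minimal sketch of that final computation (illustrative Python, not taken from the patent text; the class names are assumptions, since the patent does not enumerate its categories), the coverage rate of each ground object class is simply its share of the pixels in the segmented label map:

```python
import numpy as np

# Hypothetical class indices; the patent does not enumerate its classes.
CLASSES = {0: "background", 1: "vegetation", 2: "water", 3: "building"}

def coverage_rates(label_map: np.ndarray) -> dict:
    """Return {class name: fraction of pixels} for a 2-D label map
    produced by the segmentation (and CRF refinement) steps."""
    total = label_map.size
    return {name: np.count_nonzero(label_map == idx) / total
            for idx, name in CLASSES.items()}

# Usage: label_map = refined per-pixel predictions, shape (H, W)
# coverage_rates(label_map) -> e.g. {"vegetation": 0.42, ...}
```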

Embodiment 2

[0040] Based on Embodiment 1, the fully convolutional neural network constructed in step S1 takes the ResNet-50 convolutional neural network as its base and adds a parallel atrous convolution module with different dilation rates, turning the model into a network model with an image segmentation function.

[0041] The network model structure is shown in Figure 2; the specific structure is as follows:

[0042] The connections from input to output are: a convolutional layer, a pooling layer, 4 residual structure block modules, a parallel atrous convolution module, and a 1×1 convolutional layer.

[0043] The residual structure is added to extract features better; the parallel atrous convolution module is designed to extract feature information at more scales and improve the segmentation result; and the 1×1 convolution is introduced so that the input image size is unrestricted and spatial information is preserved.

[0044] The size of the first convolutional layer is 3×3, the numbe...
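The parameter listing truncates here, but paragraphs [0040] to [0042] describe enough structure for a rough sketch. The PyTorch code below is a hypothetical reconstruction, not the patent's exact model: it reuses torchvision's stock ResNet-50 stem (a 7×7 first convolution, whereas the patent specifies 3×3), assumes dilation rates of 6, 12 and 18 for the parallel atrous branches, and picks an arbitrary class count:

```python
import torch
import torch.nn as nn
import torchvision

class ParallelAtrous(nn.Module):
    """Parallel 3x3 atrous (dilated) convolutions at several rates,
    concatenated to capture multi-scale context. Rates are assumed."""
    def __init__(self, in_ch, out_ch, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r)
            for r in rates)

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

class CoverageFCN(nn.Module):
    """Sketch of the described model: convolution + pooling + the four
    ResNet-50 residual stages, a parallel atrous module, then a 1x1
    convolution. num_classes is an assumption."""
    def __init__(self, num_classes=4):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        self.backbone = nn.Sequential(
            resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
            resnet.layer1, resnet.layer2, resnet.layer3, resnet.layer4)
        self.atrous = ParallelAtrous(2048, 256)
        self.classifier = nn.Conv2d(256 * 3, num_classes, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        scores = self.classifier(self.atrous(self.backbone(x)))
        # Upsample class scores back to the input resolution.
        return nn.functional.interpolate(
            scores, size=(h, w), mode="bilinear", align_corners=False)
```

Because every layer is convolutional and the head is a 1×1 convolution, the model accepts arbitrary input sizes and keeps spatial information, which is the design goal stated in paragraph [0043].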

Further embodiment

[0048] Based on the above embodiments, in step S2 the network model of the present invention is trained in a supervised manner, which requires a large amount of training data with ground-truth labels for the training process. The specific implementation is as follows (a code sketch follows the list):

[0049] S2.1. Annotate the collected remote sensing images pixel by pixel according to the categories to be segmented;

[0050] S2.2. Use a sliding-window cutting algorithm to cut the annotated remote sensing images into labeled sub-image blocks of size 256×256;

[0051] S2.3. Rotate these sub-image blocks by 90°, 180° and 270°, mirror them up-down and left-right, scale them by factors of 0.5, 1.5 and 2, and add Gaussian and salt-and-pepper noise, expanding the data volume to 16 times the original;

[0052] S2.4. Randomly divide the augmented data set into network training data and network test data at a ratio of 8:2.
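Below is a compact NumPy sketch of steps S2.2 to S2.4 (hypothetical code, not from the patent: the 0.5/1.5/2× rescaling and the salt-and-pepper noise are omitted for brevity, and the Gaussian noise level is an arbitrary choice):

```python
import numpy as np

def sliding_window(img, label, size=256, stride=256):
    """S2.2: cut an annotated image into 256x256 (patch, label) tiles."""
    for y in range(0, img.shape[0] - size + 1, stride):
        for x in range(0, img.shape[1] - size + 1, stride):
            yield img[y:y + size, x:x + size], label[y:y + size, x:x + size]

def augment(patch, label):
    """S2.3 (partial): rotations and mirrors are applied to image and
    label together; noise is applied to the image only."""
    out = []
    for k in (0, 1, 2, 3):  # 0, 90, 180, 270 degrees
        out.append((np.rot90(patch, k), np.rot90(label, k)))
    out.append((np.flipud(patch), np.flipud(label)))  # up-down mirror
    out.append((np.fliplr(patch), np.fliplr(label)))  # left-right mirror
    noisy = patch + np.random.normal(0, 5, patch.shape)  # Gaussian noise
    out.append((np.clip(noisy, 0, 255).astype(patch.dtype), label))
    return out

def split(samples, ratio=0.8, seed=0):
    """S2.4: random 8:2 train/test split."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    cut = int(ratio * len(samples))
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]
```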


Abstract

The invention discloses a ground object coverage rate calculation method based on a fully convolutional network and a conditional random field. The method comprises the following steps: constructing the fully convolutional neural network; preparing training data, wherein the collected remote sensing images are annotated pixel by pixel according to the categories to be segmented, data augmentation is applied, and a semantic segmentation data set is constructed; training the fully convolutional neural network, i.e. inputting the obtained semantic segmentation data set into the constructed network, iterating the training and updating the network parameters until the training result meets a preset convergence condition; performing remote sensing image segmentation, i.e. applying the trained network to the image to be segmented to obtain preliminary segmentation results for the various ground objects; optimizing the segmentation result; and computing the ground object coverage information. No professional software is needed, and the method handles problems that traditional segmentation algorithms cannot solve well.
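The abstract names the conditional random field step but does not parameterize it. One common realization (an assumption here, not confirmed by the patent text) is the fully connected CRF of Krähenbühl and Koltun, available in the pydensecrf package; the kernel parameters below are that package's conventional defaults, not values from the patent:

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, probs, iters=5):
    """Refine network softmax output `probs` (C, H, W) for an RGB
    `image` (H, W, 3, uint8) with a fully connected CRF and return
    the refined per-pixel label map."""
    c, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, c)
    d.setUnaryEnergy(unary_from_softmax(probs.astype(np.float32)))
    d.addPairwiseGaussian(sxy=3, compat=3)            # smoothness kernel
    d.addPairwiseBilateral(sxy=80, srgb=13,           # appearance kernel
                           rgbim=np.ascontiguousarray(image), compat=10)
    q = np.array(d.inference(iters)).reshape(c, h, w)
    return q.argmax(axis=0)
```

The refined label map can then be fed directly into the coverage computation sketched under Embodiment 1.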

Description

Technical field

[0001] The invention relates to a remote sensing image ground object coverage algorithm, and in particular to a ground object coverage calculation method based on a fully convolutional network and a conditional random field.

Background technique

[0002] Ground object coverage information is an important part of remote sensing image information. Most existing methods use professional software such as ENVI to roughly count the coverage rate from the multispectral information of remote sensing images, or count the ground object coverage rate directly from the remote sensing images; counting ground object coverage directly from remote sensing images currently relies on traditional remote sensing image segmentation methods. However, traditional methods, such as those based on brightness, texture and other features, K-means clustering, or HOG, often need excessive prior information and even...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06N3/04
CPC: G06V20/176; G06V20/182; G06V20/13; G06V20/188; G06N3/045
Inventors: 段昶, 罗兴奕, 朱策
Owner: 成都图必优科技有限公司