
Remote sensing image cloud detection method based on multi-scale fusion semantic segmentation network

A remote sensing image cloud detection method based on a multi-scale fusion semantic segmentation network, applied in biological neural network models, instruments, character and pattern recognition, etc. It addresses the problem of low cloud detection accuracy, achieves fine segmentation, improves cloud detection accuracy, and reduces cloud-area detection error.

Pending Publication Date: 2019-08-13
HARBIN INST OF TECH
Cites: 3 · Cited by: 31

AI Technical Summary

Problems solved by technology

[0010] The purpose of the present invention is to solve the problem of low cloud detection accuracy in existing cloud detection methods that rely on manually extracted features.



Examples


Embodiment 1

[0038] Embodiment 1: As shown in Figure 1, the remote sensing image cloud detection method based on a multi-scale fusion semantic segmentation network described in this embodiment comprises the following steps:

[0039] Step 1: Randomly select N0 images from the real panchromatic visible-light remote sensing image data set as the original remote sensing images;

[0040] Preprocess the N0 original remote sensing images to obtain N0 preprocessed remote sensing images;

[0041] The data set used in Step 1 is the 2-meter-resolution real panchromatic visible-light remote sensing image data set captured by the Gaofen-1 satellite;

[0042] Step 2: Use the N0 preprocessed remote sensing images as the training set and input them into the weighted multi-scale fusion network (WMSFNet) for training. During training, the convolution kernel parameters of the convolutional layers in the semantic segmentation network are continuously updated until the set m...

Embodiment 2

[0066] Embodiment 2 differs from Embodiment 1 in that the N0 original remote sensing images are preprocessed to obtain N0 preprocessed remote sensing images as follows:

[0067] For any original remote sensing image, calculate the mean grayscale value M of each channel, then subtract M from the grayscale value of each pixel to obtain the corresponding preprocessed remote sensing image. That is, the gray value of each pixel in the preprocessed remote sensing image corresponding to the original remote sensing image is:

[0068] O'(i,j)=O(i,j)-M (1)

[0069] where O(i,j) is the gray value of pixel (i,j) in the original remote sensing image, and O′(i,j) is the pixel in the preprocessed remote sensing image corresponding to the original remote sensing ima...
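The per-channel mean subtraction of Eq. (1) can be sketched as follows; the function name and array layout (H × W × C) are illustrative, not taken from the patent:

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Per-channel mean subtraction, as in Eq. (1): O'(i,j) = O(i,j) - M.

    `image` is an H x W x C array; M is the mean gray value of each channel.
    """
    # One mean per channel, kept as a 1 x 1 x C array so it broadcasts over H x W.
    M = image.mean(axis=(0, 1), keepdims=True)
    return image.astype(np.float64) - M
```

After this step each channel of the preprocessed image has zero mean, which is the usual purpose of such centering before feeding images to a convolutional network.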

Embodiment 3

[0071] Embodiment 3: As shown in Figures 2 and 3, this embodiment differs from Embodiment 2 in that the specific process of Step 2 is:

[0072] Use the N0 preprocessed remote sensing images as the training set for the semantic segmentation network. Before training begins, the network parameters of the semantic segmentation network must be initialized; the training process can start once initialization is complete;

[0073] The semantic segmentation network includes 15 convolutional layers, 5 pooling layers, 2 deconvolution layers, and 2 cropping layers, which are respectively:

[0074] Two convolutional layers, each with 3×3 convolution kernels and 64 kernels;

[0075] A pooling layer with a 2×2 kernel and 64 channels;

[0076] Two convolutional layers, each with 3×3 convolution kernels and a...
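The layer listing above is truncated, but the pattern it starts (stacked 3×3 convolutions followed by 2×2 pooling) is a standard VGG-style stem. The sketch below traces the feature-map shapes through such a stem; the stage configuration after the first two quoted layers (channel counts, number of stages) is an assumption for illustration, not the patent's full architecture:

```python
def trace_shapes(h, w, stem=((2, 64), (2, 128))):
    """Trace feature-map shapes through a VGG-style stem.

    Each stage is (number of 3x3 convs, output channels). A 3x3 convolution
    with stride 1 and padding 1 preserves H x W; each 2x2 pooling with
    stride 2 halves both. The default `stem` beyond the quoted first stage
    is illustrative only.
    """
    shapes = [(h, w, 3)]                    # panchromatic input, assumed 3-channel here
    for convs, ch in stem:
        shapes += [(h, w, ch)] * convs      # 3x3 conv, stride 1, pad 1: size unchanged
        h, w = h // 2, w // 2               # 2x2 pooling, stride 2: size halved
        shapes.append((h, w, ch))
    return shapes
```

Tracing shapes this way makes it easy to check that the deconvolution and cropping layers mentioned in [0073] can restore the pooled maps to the input resolution.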



Abstract

The invention discloses a remote sensing image cloud detection method based on a multi-scale fusion semantic segmentation network, and belongs to the technical field of remote sensing image cloud detection. The invention solves the low cloud detection precision of existing methods that perform cloud detection by manually extracting features. The method comprises the following steps: extracting shallow features with the first three levels of sub-networks; extracting deep features with the last two levels of sub-networks; and fusing the extracted deep features with the shallow features, thereby fully using the rich detail information contained in the shallow features and the rich semantic information contained in the deep features. The advantages of the deep-feature and shallow-feature boundaries are thus combined, and boundary segmentation becomes finer. The best cloud detection effect is achieved by optimizing the ratio of deep features to shallow features, and the cloud-area detection error is smaller than 1%. The method can be applied in the technical field of remote sensing image cloud detection.
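The weighted deep/shallow fusion the abstract describes can be sketched as below. The upsampling method (nearest-neighbour repetition) and the weight value are placeholders; the patent optimizes the deep-to-shallow ratio, and its actual network uses deconvolution layers for upsampling:

```python
import numpy as np

def fuse(deep: np.ndarray, shallow: np.ndarray, w_deep: float = 0.5) -> np.ndarray:
    """Weighted fusion of deep and shallow feature maps (equal channel counts assumed).

    The deep map, at coarser resolution, is upsampled by nearest-neighbour
    repetition to the shallow map's H x W, then the two are combined as a
    convex combination. `w_deep` stands in for the optimized ratio.
    """
    fy = shallow.shape[0] // deep.shape[0]   # integer upsampling factors,
    fx = shallow.shape[1] // deep.shape[1]   # assuming exact divisibility
    up = deep.repeat(fy, axis=0).repeat(fx, axis=1)
    return w_deep * up + (1.0 - w_deep) * shallow
```

A larger `w_deep` emphasizes semantic information from the deep features; a smaller one emphasizes the boundary detail carried by the shallow features.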

Description

Technical Field

[0001] The invention belongs to the technical field of remote sensing image cloud detection, and in particular relates to a remote sensing image cloud detection method.

Background Technique

[0002] Remote sensing is an important means of obtaining earth resources and environmental information, and cloud is the main factor affecting the quality of satellite remote sensing images. Generally, 50% of the earth's surface is covered by clouds, and their presence brings great inconvenience to remote sensing image processing. Remote sensing images covered by clouds contain less usable information but occupy large amounts of system storage space and transmission bandwidth, thereby reducing the utilization rate of satellite data. At present, except for synthetic aperture radar sensors, which can penetrate clouds to obtain surface information, other sensors have not completely solved the problem of cloud coverage in remote sensing images, and most of t...

Claims


Application Information

IPC(8): G06K9/00, G06K9/34, G06K9/62, G06N3/04
CPC: G06V20/13, G06V10/267, G06N3/045, G06F18/253, G06F18/214, Y02A90/10
Inventors: 彭宇, 郭玥, 于希明, 马宁, 姚博文, 刘大同, 彭喜元
Owner HARBIN INST OF TECH