
Remote sensing image semantic segmentation method combining deep learning and random forest

A semantic segmentation method combining deep learning and random forest, applied in neural learning methods, computer components, and character and pattern recognition. It addresses the problems of low segmentation accuracy and limited generality in existing methods, with the effects of improved classification accuracy, improved efficiency and universality, and a small number of required spectral bands.

Pending Publication Date: 2020-07-31
CAPITAL NORMAL UNIVERSITY

AI Technical Summary

Problems solved by technology

Although existing methods can efficiently and quickly realize pixel-level classification of various ground objects in high-resolution remote sensing images, and simplify the complex workflow of traditional classification methods, their segmentation accuracy is low and their generality is limited.


Image

  • Remote sensing image semantic segmentation method combining deep learning and random forest

Examples


Embodiment 1

[0065] As shown in Figure 1, a remote sensing image semantic segmentation method combining deep learning and random forest includes the following steps:

[0066] 101. Create a training data set for the study area, using samples and sample labels as the training data.

[0067] Acquisition of samples and sample labels: first obtain the GF-2 image, composite its red, green, and blue bands, and crop the composite image into samples of 512×512 pixels, as shown in Figure 2. Annotate the samples in the labelme software, as shown in Figure 3; the saved annotation is a json file, which is converted into a dataset file. At this point the label data has 8-bit depth; it is then rendered in "true color" and converted into a 24-bit-depth sample label file, as shown in Figure 4.
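The 8-bit-to-24-bit "true color" label conversion described above can be sketched in Python as follows. The class palette here is hypothetical; this excerpt does not specify which classes map to which colors:

```python
import numpy as np

# Hypothetical palette mapping class indices to RGB colors; the
# patent excerpt does not specify the actual colors used.
PALETTE = {
    0: (0, 0, 0),        # background
    1: (0, 0, 255),      # water body
    2: (0, 255, 0),      # vegetation
    3: (255, 0, 0),      # impervious surface
}

def label_to_true_color(label_8bit):
    """Convert an 8-bit class-index label (H, W) into a 24-bit RGB label (H, W, 3)."""
    h, w = label_8bit.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    for cls, color in PALETTE.items():
        rgb[label_8bit == cls] = color
    return rgb
```

Each 512×512 8-bit label produced by labelme would pass through such a function before being saved as a 24-bit sample label file.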

[0068] The training data set of the study area uses 512×512 samples with 24-bit depth, together with their sample labels, as the specification of the t...

Embodiment 2

[0109] Based on Embodiment 1 above, as shown in Figure 5, GF-2 optical image data is used as the data source for the study area. The samples and sample labels from Embodiment 1 are used as the training data set.

[0110] A fully convolutional neural network model is written in Python based on the TensorFlow framework. The model is trained with the above samples and sample labels; the trained model is then applied to the study-area data to extract its deep features, which are visualized.
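A minimal TensorFlow/Keras sketch of a fully convolutional network over the 512×512 RGB samples is given below. The layer counts, filter sizes, and class count are illustrative assumptions, not the patent's exact architecture, which this excerpt does not detail:

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 4  # hypothetical class count; the excerpt does not state it

def build_fcn(input_shape=(512, 512, 3), num_classes=NUM_CLASSES):
    """A minimal fully convolutional network: every layer is convolutional,
    so the output is a per-pixel class-score map the same size as the input."""
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)                      # 1/2 resolution
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)                      # 1/4 resolution
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(size=4)(x)                # back to full resolution
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_fcn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Deep features would be read out from an intermediate layer of the trained model rather than from the final softmax.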

[0111] The Band Math function of ENVI 5.3 is used to extract the normalized difference vegetation index and the normalized difference water index of the GF-2 image data according to formulas (3) and (4); the Band Math function performs numerical operations on the pixel values of each band.
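Formulas (3) and (4) are not reproduced in this excerpt; the standard definitions of these indices are NDVI = (NIR − R)/(NIR + R) and NDWI = (G − NIR)/(G + NIR). Assuming those definitions, a NumPy equivalent of ENVI's per-pixel Band Math is:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    # Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)
    # eps guards against division by zero on dark pixels.
    return (nir - red) / (nir + red + eps)

def ndwi(green, nir, eps=1e-9):
    # Normalized Difference Water Index (McFeeters form): (G - NIR) / (G + NIR)
    return (green - nir) / (green + nir + eps)
```

Both functions operate element-wise on whole band arrays, mirroring how Band Math evaluates an expression at every pixel.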

[0112] The Co-occurrence Measures function of ENVI 5.3 is used to extract the variance, entropy, dissimilarity, and angular second moment of four t...
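The four texture measures named above are all derived from a gray-level co-occurrence matrix (GLCM). A small NumPy sketch of the computation, for a single window and a single pixel offset (ENVI computes these over a sliding window and averages several offsets), is:

```python
import numpy as np

def glcm(window, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one offset, normalized to probabilities.
    `window` must contain integer gray levels in [0, levels)."""
    g = np.zeros((levels, levels), dtype=float)
    h, w = window.shape
    for i in range(h - dy):
        for j in range(w - dx):
            g[window[i, j], window[i + dy, j + dx]] += 1
    s = g.sum()
    return g / s if s else g

def glcm_texture_features(window, levels=8):
    """Variance, entropy, dissimilarity, and angular second moment of a GLCM."""
    p = glcm(window, levels)
    i, j = np.indices(p.shape)
    mean = (i * p).sum()
    variance = ((i - mean) ** 2 * p).sum()
    entropy = -(p[p > 0] * np.log(p[p > 0])).sum()
    dissimilarity = (np.abs(i - j) * p).sum()
    asm = (p ** 2).sum()          # angular second moment
    return variance, entropy, dissimilarity, asm
```

Applying this per window across each band yields the texture feature maps that join the spectral indices as shallow features.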



Abstract

The invention relates to a remote sensing image semantic segmentation method combining deep learning and a random forest. The method comprises the following steps: making a training data set of a research region, employing samples and sample labels as the training data set; establishing a fully convolutional neural network model and training it with the samples and sample labels; extracting deep features of the research region using the trained model; meanwhile, extracting shallow features of the GF-2 image of the research region; performing multi-feature combination of the deep and shallow features; and performing semantic segmentation with a random forest. Through the construction of the data set, the method uses few, easily obtained image bands, giving it high universality and high segmentation precision. By combining deep learning with random forests, the method innovatively fuses shallow and deep features so that they complement each other, overcoming the defects of any single method. It performs well on remote sensing image semantic segmentation, effectively improves classification precision, and gives good results for the extraction of water bodies, vegetation, and impervious surfaces.
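The pipeline described in the abstract, concatenating deep and shallow features per pixel and classifying with a random forest, can be sketched with scikit-learn. The array shapes, feature dimensions, and tree count below are illustrative assumptions on synthetic data, not the patent's parameters:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative shapes: an H x W image with D_deep FCN features and
# D_shallow spectral/texture features per pixel (synthetic data).
H, W, D_deep, D_shallow = 32, 32, 16, 6
rng = np.random.default_rng(0)
deep = rng.normal(size=(H, W, D_deep))
shallow = rng.normal(size=(H, W, D_shallow))
labels = rng.integers(0, 4, size=(H, W))   # 4 hypothetical classes

# Multi-feature combination: concatenate along the feature axis,
# then flatten pixels into a (H*W, D) sample matrix.
features = np.concatenate([deep, shallow], axis=-1).reshape(-1, D_deep + D_shallow)
y = labels.reshape(-1)

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(features, y)
segmentation = rf.predict(features).reshape(H, W)   # pixel-level class map
```

Predicting on every pixel's combined feature vector and reshaping back to the image grid yields the final semantic segmentation map.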

Description

Technical field

[0001] The invention relates to the technical field of remote sensing image classification, in particular to a remote sensing image semantic segmentation method combining deep learning and random forest.

Background technique

[0002] Ground object information has always been extremely important information in remote sensing images. At present, the resolution of remote sensing images has greatly improved. High-resolution remote sensing images carry rich, fine ground feature information, and the details of ground features are clearer, which provides a good research basis for the extraction of ground feature information from remote sensing images. At the same time, however, the structure of ground objects is more complicated, and interference is more difficult to handle.

[0003] Semantic segmentation of remote sensing images is the pixel-level classification of images, and is an important research direction in the application of remote s...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (IPC8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V20/13, G06N3/045, G06F18/24323, G06F18/253
Inventors: 张佳鑫, 高博, 宫辉力, 陈蓓蓓, 朱琳, 刘园园, 李庆端, 王静
Owner: CAPITAL NORMAL UNIVERSITY