
Multi-mode-characteristic-fusion-based remote-sensing image classification method

A remote-sensing image classification and feature fusion technology, applied in the fields of character and pattern recognition, biological neural network models, instruments, etc. It addresses the problems that deep features lack the information carried by shallow features and that the classification results therefore cannot fully express the image information.

Active Publication Date: 2016-04-20
THE PLA INFORMATION ENG UNIV

AI Technical Summary

Problems solved by technology

Although the classification accuracy of deep features is higher than that of shallow features, deep features lack the information contained in shallow features, and the classification results cannot fully express the image information.



Examples


Experiment example

[0080] The following example uses 300 high-resolution remote sensing images collected from Google Maps at a resolution of 60 cm, each 600×600 pixels in size. The selected images cover eight semantic classes: Urban Intensive Residential Area (UIR), Urban Sparse Residential Area (USR), Rural Residential Area (PR), River (RV), Farm Land (FL), Waste Land (WL), Forest (FR) and Mountain (MT), as shown in Figure 4. In this experimental example, six classification tasks are used to evaluate the classification performance of the method of the present invention; the six object categories are buildings, roads, waste land, farm land, forests and rivers. When training the neural network model, 400 images are provided for each sample image set; these are patches ranging in size from 80×80 to 200×200 pixels extracted from the 300 satellite images.
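As a rough illustration of the patch-extraction step described above (square patches between 80×80 and 200×200 pixels cut from 600×600 source images, 400 per class), the following Python sketch uses NumPy only; the placeholder source image and the helper name random_patch are assumptions for illustration, not details taken from the patent.

    import numpy as np

    rng = np.random.default_rng(42)

    def random_patch(image, min_size=80, max_size=200):
        # Cut one square patch whose side length is sampled from [min_size, max_size].
        h, w = image.shape[:2]
        size = int(rng.integers(min_size, max_size + 1))
        top = int(rng.integers(0, h - size + 1))
        left = int(rng.integers(0, w - size + 1))
        return image[top:top + size, left:left + size]

    # Placeholder 600x600 RGB tile; in practice each of the 300 Google Maps images
    # mentioned in the text would be loaded here.
    source = rng.integers(0, 256, (600, 600, 3), dtype=np.uint8)

    # 400 patches per class, as stated in the text.
    patches = [random_patch(source) for _ in range(400)]
    print(len(patches), patches[0].shape)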

[0081] The architecture of the convolutional neural network used in this experiment e...
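The architecture description is truncated in the source text, so the following is only a generic small convolutional network for variable-size RGB patches, sketched in PyTorch; it is purely illustrative and should not be taken as the patent's actual network.

    import torch
    import torch.nn as nn

    class PatchCNN(nn.Module):
        def __init__(self, num_classes=6):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d((4, 4)),  # lets patches of 80x80 up to 200x200 share one head
            )
            self.classifier = nn.Linear(64 * 4 * 4, num_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(torch.flatten(x, 1))

    # Example forward pass on two 120x120 RGB patches.
    logits = PatchCNN()(torch.randn(2, 3, 120, 120))
    print(logits.shape)  # torch.Size([2, 6])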



Abstract

The invention belongs to the technical field of remote-sensing image classification and relates to a remote-sensing image classification method based on multi-modal feature fusion. Features of at least two modalities are extracted; the obtained modal features are input into an RBM model and fused to obtain a joint representation of the modal features; and, according to the joint representation, a class estimate is made for each superpixel region, thereby achieving remote-sensing image classification. In the invention, the features of the various modalities, including a shallow-layer modal feature and a deep-layer modal feature, are combined by the RBM model into the corresponding joint representation, which contains not only the hierarchical expression of the deep-layer modal feature of the remote-sensing image but also the outwardly visible similarity captured by the shallow-layer modal feature. The joint representation therefore has strong discriminative power, and the classification accuracy of remote-sensing images is improved.
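To make the fusion idea concrete, here is a minimal, hypothetical Python sketch: shallow and deep modal features (random placeholders here) are concatenated as the visible layer of an RBM, whose hidden activations serve as the joint representation and feed a simple per-superpixel classifier. It uses scikit-learn's BernoulliRBM purely as a stand-in for the RBM model described in the abstract; feature dimensions, class count and the classifier choice are assumptions.

    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    rng = np.random.default_rng(0)

    # Placeholder per-superpixel features for two modalities (hypothetical values):
    # shallow features, e.g. colour/texture histograms, and deep features, e.g. CNN activations.
    n_superpixels, n_shallow, n_deep = 500, 64, 128
    shallow = rng.random((n_superpixels, n_shallow))
    deep = rng.random((n_superpixels, n_deep))
    labels = rng.integers(0, 8, n_superpixels)  # 8 semantic classes, as in the experiment

    # Concatenate both modal features as the RBM's visible layer, scaled to [0, 1]
    # because BernoulliRBM expects values in that range.
    visible = np.hstack([shallow, deep])
    visible = (visible - visible.min()) / (visible.max() - visible.min() + 1e-9)

    # The RBM's hidden activations act as the joint representation of the modal
    # features; a simple classifier then assigns a class to each superpixel.
    model = Pipeline([
        ("rbm", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20, random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(visible, labels)
    print(model.predict(visible[:5]))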

Description

Technical field

[0001] The invention relates to a remote-sensing image classification method based on multi-modal feature fusion, and belongs to the technical field of remote-sensing image classification.

Background technique

[0002] Advances in remote-sensing imaging technology have produced explosive growth of geospatial information in both quantity and quality. Studying how to automatically analyse and understand image content and extract valuable information is therefore essential, and the most basic task is the classification of remote-sensing images. Current high-resolution remote-sensing imagery carries rich visual information describing the Earth's surface, and from these images categories such as buildings, roads, farmland, forests and rivers can be identified. Environmental and socioeconomic research relies on remote-sensing image classification results, so many scholars have studied different image features and classification techn...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/46; G06K9/62; G06N3/02
CPC: G06N3/02; G06V10/50; G06V10/56; G06V10/462; G06F18/24317; G06F18/253
Inventor: 李科, 李钦, 游雄
Owner THE PLA INFORMATION ENG UNIV