Remote sensing image description method based on joint latent semantic embedding

A technology relating to remote sensing images and semantics, applied in the fields of instruments, computing, and electrical digital data processing, etc., which solves the problems that existing methods cannot be applied effectively to complex scenes and cannot fully utilize remote sensing image annotations, and achieves the effect of a fuller description.

Active Publication Date: 2021-06-22
XI'AN INST OF OPTICS & FINE MECHANICS - CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

[0008] In order to solve the problems that the existing remote sensing image description methods cannot make full use of the annotations in the database and cannot be applied effectively in some complex scenes, the present invention provides a remote sensing image description method based on joint latent semantic embedding.




Embodiment Construction

[0055] Referring to figure 1, the steps carried out by the present invention are as follows:

[0056] Step 1) Construct the training sample set and the test sample set:

[0057] Divide the original remote sensing images and their corresponding annotations in the database (UCM-captions, Sydney-captions or RSICD): preferably, 90% of the original remote sensing images and their corresponding annotations are placed in the training sample set and the remaining 10% in the test sample set; the remote sensing images to be retrieved are also placed in the test sample set.
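A minimal sketch of step 1, assuming each database has already been loaded as a list of (image, annotations) pairs; the function and variable names are illustrative, and only the 90/10 split comes from the text above:

```python
import random

def split_dataset(samples, train_ratio=0.9, seed=0):
    """Split (image, five-annotation) pairs from UCM-captions, Sydney-captions
    or RSICD into a training sample set and a test sample set (90% / 10%)."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# train_set, test_set = split_dataset(samples)  # `samples` loaded beforehand;
# the remote sensing images to be described are then appended to test_set.
```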

[0058] Step 2) Jointly express each original remote sensing image in the training sample set and its corresponding annotated text:

[0059] Step 2.1) Use a pre-trained deep neural network to extract the image features of each original remote sensing image;
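A rough sketch of step 2.1; the excerpt does not name the pre-trained network, so a torchvision ResNet-50 with its classifier removed is used here purely as an illustration:

```python
import torch
from PIL import Image
from torchvision import models, transforms as T

# Hypothetical backbone choice: any ImageNet-pre-trained CNN would serve.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()   # keep the 2048-d pooled feature
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def image_feature(path):
    """Extract a fixed-length image feature for one original remote sensing image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return backbone(x).squeeze(0)   # shape: (2048,)
```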

[0060] Step 2.2) Use the pre-trained word vectors to extract the...
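Step 2.2 is truncated in this excerpt; judging from the abstract, the pre-trained word vectors are used to extract text features from the five annotated sentences of each image and fuse them into a joint text feature. A hedged sketch, assuming GloVe-style vectors in a plain dict and simple mean pooling (the actual fusion rule is not given here):

```python
import numpy as np

def sentence_feature(sentence, word_vectors, dim=300):
    """Average the pre-trained word vectors of one annotated sentence."""
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def joint_text_feature(five_sentences, word_vectors, dim=300):
    """Fuse the five annotations of one remote sensing image into a joint text feature."""
    return np.mean([sentence_feature(s, word_vectors, dim) for s in five_sentences], axis=0)
```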



Abstract

In order to solve the problem that existing remote sensing image description methods cannot make full use of the annotations in the database and cannot be applied effectively in some complex scenes, the present invention provides a remote sensing image description method based on joint latent semantic embedding. The steps include: 1) constructing a training sample set and a test sample set; 2) jointly expressing the original remote sensing images in the training sample set and their corresponding annotated text; 3) learning a latent semantic space; 4) generating a description of the remote sensing image. The present invention extracts a joint text feature from the five annotated sentences that describe each original remote sensing image, synthesizing the information in the different annotations so that it corresponds as completely as possible to the content contained in the original remote sensing image; by adding a constraint expression, the sample pair whose joint text feature and remote sensing image feature are closest in distance is identified; finally, the five annotated sentences contained in the joint text feature are used together to describe the remote sensing image, so that the remote sensing image can be described more fully.
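The abstract outlines steps 3) and 4): both modalities are projected into a shared latent semantic space under a constraint that pulls each remote sensing image feature toward its joint text feature, and at test time the closest joint text feature is retrieved and its five annotated sentences are returned as the description. A rough PyTorch sketch under those assumptions; the linear projections, cosine-distance loss and retrieval rule are illustrative and not taken from the patent text:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentEmbedding(nn.Module):
    """Project image features and joint text features into one latent semantic space."""
    def __init__(self, img_dim=2048, txt_dim=300, latent_dim=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, latent_dim)
        self.txt_proj = nn.Linear(txt_dim, latent_dim)

    def forward(self, img_feat, txt_feat):
        return (F.normalize(self.img_proj(img_feat), dim=-1),
                F.normalize(self.txt_proj(txt_feat), dim=-1))

def train_step(model, optimizer, img_feat, txt_feat):
    """One update of the constraint that paired features lie close in the latent space."""
    z_img, z_txt = model(img_feat, txt_feat)
    loss = (1 - (z_img * z_txt).sum(dim=-1)).mean()   # cosine distance of each pair
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def describe(model, query_img_feat, train_txt_feats, train_annotations):
    """Return the five annotations of the training sample whose joint text feature
    is closest to the query image in the latent semantic space."""
    z_img, z_txt = model(query_img_feat.unsqueeze(0), train_txt_feats)
    idx = (z_img @ z_txt.T).argmax().item()
    return train_annotations[idx]
```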

Description

Technical Field

[0001] The invention belongs to the technical field of information processing, and in particular relates to a remote sensing image description method that can be used in fields such as earthquake disaster assessment and ocean monitoring. The present invention is aimed at air-to-ground remote sensing images, that is, images of ground objects captured from a top-down viewpoint.

Background Technique

[0002] High-resolution remote sensing images have been used in the classification and assessment of earthquake disasters, ocean observation and other fields. With the continuous development of remote sensing and related technologies, it is becoming more and more convenient to obtain remote sensing images of higher resolution. A large number of remote sensing images are generated every moment, and they consume huge manpower and material resources from generation to transmission. How to mine the information in remote sensing...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00, G06K9/62, G06F16/583
Inventor: 卢孝强, 王斌强
Owner: XI'AN INST OF OPTICS & FINE MECHANICS - CHINESE ACAD OF SCI