
A Neural Network-Based Rotation Difference Correction Method for Multimodal Remote Sensing Images

A neural network and remote sensing image technology, applied in the field of remote sensing image processing, which solves problems such as image geographic information being insufficiently accurate or even unusable and the related algorithms thereby losing their effectiveness, and which achieves the effects of simplifying the network structure, enhancing universality, and eliminating correction errors.

Active Publication Date: 2022-06-03
TSINGHUA UNIV +1


Problems solved by technology

The scale-invariant feature transform (SIFT) and its improved variants achieve good robustness to geometric differences between homologous images by constructing scale pyramids and assigning dominant orientations. However, because of the nonlinear radiation distortion and noise between multimodal images, these algorithms remain limited.

In recent years, the Histogram of Orientated Phase Congruency (HOPC) algorithm and the Channel Features of Orientated Gradients (CFOG) algorithm proposed by Ye Yuanxin et al. use relatively accurate auxiliary geographic information to geometrically correct the reference image and the target image respectively, and eliminate the obvious rotation and translation differences between image pairs through an initial registration. In practice, however, the geographic information attached to an image may be insufficiently accurate or even unavailable, in which case these algorithms lose their effectiveness.

Zhou Weishuo, An Bowen et al., in the paper "Heterogeneous Remote Sensing Image Registration Algorithm Based on Geometric Invariance and Local Similarity Features" (Infrared Technology, 2019, 41(06): 561-571), disclose a remote sensing image matching method based on geometric invariance and local similarity features: the feature-direction generation principle of the SURF (Speeded-Up Robust Features) algorithm is used to obtain the feature direction vectors of the extracted coarse matching points and eliminate gross mismatches; the feature-direction angle difference between the two images is then computed, and the image is rotation-corrected according to this angle difference using bicubic interpolation; finally, the coarse matching points in the image to be registered are converted to point coordinates consistent with the corrected image. This method realizes, to a certain extent, the correction of the rotation difference between heterogeneous images, but its selection of the main direction relies mainly on the SURF algorithm, which makes it difficult to obtain good results for the various multimodal remote sensing images with large radiation distortion; at the same time, the method depends heavily on the initial coarse matching, so it still has certain limitations.
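For orientation, a rough sketch of the prior approach described above (estimating the rotation from the orientation angles of matched feature points and then applying a bicubic-interpolated rotation) might look as follows. OpenCV's SIFT detector stands in for SURF here purely for illustration, and the ratio-test matching and median-based outlier rejection are assumptions rather than a reproduction of the cited paper.

```python
# Hedged sketch of a keypoint-orientation-based rotation correction;
# SIFT replaces SURF for illustration, details are assumptions.
import cv2
import numpy as np

def estimate_rotation_from_keypoints(ref_img, tgt_img):
    detector = cv2.SIFT_create()
    kp1, des1 = detector.detectAndCompute(ref_img, None)
    kp2, des2 = detector.detectAndCompute(tgt_img, None)

    # Coarse matching with a ratio test to discard gross mismatches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

    # Angle difference between the feature directions of each matched pair,
    # wrapped to [-180, 180); the median suppresses remaining outliers.
    diffs = np.array([(kp1[m.queryIdx].angle - kp2[m.trainIdx].angle + 180) % 360 - 180
                      for m in good])
    return float(np.median(diffs))

def rotate_bicubic(img, angle_deg):
    # Rotate about the image centre with bicubic interpolation.
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_CUBIC)
```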




Embodiment Construction

[0046] S4: Based on the predicted difference angle, complete the correction of the rotation difference angle of the multimodal remote sensing images.

[0061]-[0062] The even-symmetric and odd-symmetric responses of the image I(x, y) at scale n and orientation o are obtained by convolving the image with the corresponding pair of log-Gabor filters:

[E_no(x, y), O_no(x, y)] = [I(x, y) * L_no^even, I(x, y) * L_no^odd]

[0068] The phase-consistency orientation angle at each pixel is obtained from the projections of the odd-symmetric responses onto the filter orientations, a = Σ_o O_o(x, y)·cos θ_o and b = Σ_o O_o(x, y)·sin θ_o, giving Φ(x, y) = arctan(b / a).
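As a rough illustration of the computation outlined in paragraphs [0061]-[0068], the following Python sketch derives a per-pixel phase-consistency value and orientation angle from log-Gabor filter responses. It follows the standard Kovesi-style formulation rather than the patent's exact implementation; the filter parameters, the omission of noise compensation, and the function name loggabor_phase_features are illustrative assumptions.

```python
# Minimal sketch, assuming a simplified Kovesi-style phase-consistency model.
import numpy as np

def loggabor_phase_features(img, nscale=4, norient=6,
                            min_wavelength=3, mult=2.1, sigma_on_f=0.55):
    rows, cols = img.shape
    IMG = np.fft.fft2(img)

    # Normalised frequency grid and its radius / angle.
    y, x = np.meshgrid(np.fft.fftfreq(rows), np.fft.fftfreq(cols), indexing="ij")
    radius = np.sqrt(x**2 + y**2)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC term
    theta = np.arctan2(-y, x)

    sum_energy = np.zeros((rows, cols))
    sum_amplitude = np.zeros((rows, cols))
    a = np.zeros((rows, cols))              # sum of odd responses * cos(orientation)
    b = np.zeros((rows, cols))              # sum of odd responses * sin(orientation)

    for o in range(norient):
        angl = o * np.pi / norient
        # Angular spread of the filter around this orientation.
        d_theta = np.arctan2(np.sin(theta - angl), np.cos(theta - angl))
        spread = np.exp(-d_theta**2 / (2 * (np.pi / norient)**2))

        sum_even = np.zeros((rows, cols))
        sum_odd = np.zeros((rows, cols))
        sum_amp_o = np.zeros((rows, cols))
        for s in range(nscale):
            wavelength = min_wavelength * mult**s
            f0 = 1.0 / wavelength
            # Radial log-Gabor component.
            log_gabor = np.exp(-(np.log(radius / f0))**2 /
                               (2 * np.log(sigma_on_f)**2))
            log_gabor[0, 0] = 0.0           # kill the DC component
            # [E_no, O_no] = [I * L_even, I * L_odd]: the real and imaginary
            # parts of the inverse FFT give the even and odd filter responses.
            eo = np.fft.ifft2(IMG * log_gabor * spread)
            sum_even += eo.real
            sum_odd += eo.imag
            sum_amp_o += np.abs(eo)

        energy_o = np.sqrt(sum_even**2 + sum_odd**2)
        sum_energy += energy_o
        sum_amplitude += sum_amp_o
        a += sum_odd * np.cos(angl)
        b += sum_odd * np.sin(angl)         # the "b = Σ ..." term of [0068]

    pc = sum_energy / (sum_amplitude + 1e-6)        # phase-consistency value
    phi = np.degrees(np.arctan2(b, a)) % 360.0      # orientation angle, 0..360
    return pc, phi
```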

[0075] Traverse all the pixel points of the image in turn; according to the phase-consistency orientation angle of each pixel, take its phase-consistency value as a weight and accumulate it into the corresponding one-degree angle bin.

[0078] Finally, a rotation feature vector Rot with a size of 1 × 360 is obtained.
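Following on from the sketch above, a minimal illustration of how the 1 × 360 rotation feature vector Rot described in paragraphs [0075]-[0078] could be accumulated: each pixel votes into the degree bin of its orientation angle, weighted by its phase-consistency value. The exact bin layout and the normalisation step are assumptions, not the patent's definition.

```python
# Minimal sketch, assuming one bin per degree and L2 normalisation.
import numpy as np

def rotation_feature_vector(pc, phi):
    """pc: phase-consistency values, phi: orientation angles in degrees [0, 360)."""
    bins = np.floor(phi).astype(int) % 360              # one bin per degree
    rot = np.bincount(bins.ravel(), weights=pc.ravel(), minlength=360)
    rot /= (np.linalg.norm(rot) + 1e-12)                # make images of different size comparable
    return rot.astype(np.float32)                        # shape (360,), i.e. 1 x 360
```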

[0082] The various embodiments in this specification are described in a progressive manner; what each embodiment focuses on is its difference from the other embodiments, and for the same or similar parts among the embodiments, reference may be made to one another.

[0083] The foregoing description of the disclosed embodiments enables any person skilled in the art to make or use the present invention.


Abstract

The invention discloses a neural-network-based method for correcting the rotation difference of multimodal remote sensing images, comprising the following steps. S1: obtain a pair of multimodal remote sensing images of the same target scene and perform image preprocessing. S2: for each preprocessed image, calculate the phase-consistency values and orientation angles, and from them calculate the rotation feature vector of the image. S3: use the rotation feature vectors of the two images as the input of a neural network, which calculates and outputs the predicted difference angle between the two images. S4: based on the predicted difference angle, complete the correction of the rotation difference angle of the multimodal remote sensing images. The invention effectively solves the technical problem of quickly and accurately predicting the rotation difference angle between multimodal remote sensing images using only the image data itself, and removes the dependence on auxiliary geospatial information.
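To make the S1-S4 pipeline of the abstract concrete, here is a minimal end-to-end sketch. The network architecture (a small fully connected network), the 720-dimensional concatenated input, and the use of SciPy's bicubic rotation for the final correction are assumptions; the patent only states that the two 1 × 360 rotation feature vectors are fed to a neural network that outputs the predicted difference angle. The helper functions are the hypothetical ones from the sketches above.

```python
# Hedged end-to-end sketch of S1-S4; architecture and sign convention are assumptions.
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import rotate

class RotationAnglePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        # Two 1x360 rotation feature vectors concatenated -> one predicted angle.
        self.net = nn.Sequential(
            nn.Linear(720, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, rot_ref, rot_tgt):
        return self.net(torch.cat([rot_ref, rot_tgt], dim=-1)).squeeze(-1)

def correct_rotation(ref_img, tgt_img, model):
    # S1/S2: compute the rotation feature vectors (hypothetical helpers from
    # the sketches above: loggabor_phase_features, rotation_feature_vector).
    rot_ref = rotation_feature_vector(*loggabor_phase_features(ref_img))
    rot_tgt = rotation_feature_vector(*loggabor_phase_features(tgt_img))
    # S3: predict the rotation difference angle between the two images.
    with torch.no_grad():
        angle = model(torch.from_numpy(rot_ref)[None],
                      torch.from_numpy(rot_tgt)[None]).item()
    # S4: correct the target image by rotating it back with bicubic
    # interpolation; the sign convention here is an assumption.
    return rotate(tgt_img, -angle, reshape=False, order=3, mode="nearest")
```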

Description

A Neural Network-Based Rotation Difference Correction Method for Multimodal Remote Sensing Images

Technical field

The present invention relates to the technical field of remote sensing image processing, and more specifically to a preprocessing method for coarse correction of the rotation difference in multimodal remote sensing image matching.

Background technique

Multimodal remote sensing image matching is an important research task in the field of remote sensing image processing technology and has broad application prospects. Achieving accurate registration between remote sensing images makes it possible to correlate the intrinsic information of different types of remote sensing images from different imaging sources and different acquisition times, improving the usability of joint imagery from multiple perspectives. However, geometric differences and nonlinear radiation distortion are the key difficulties that restrict the registration accuracy of remote sensing image...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T3/60; G06T5/00; G06N3/04; G06N3/08
CPC: G06T3/608; G06T5/006; G06N3/08; G06T2207/10032; G06N3/047; G06N3/045
Inventor: 黄翊航, 张海涛, 吕守业, 郑美, 吴正升
Owner: TSINGHUA UNIV