Multi-sensor video fusion method based on space-time conspicuousness detection

A multi-sensor video fusion technology in the field of image processing. It addresses the problem of degraded spatial-information extraction, improves both spatial information extraction and spatio-temporal consistency, is highly robust, and overcomes susceptibility to noise.

Inactive Publication Date: 2013-05-08
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

However, the disadvantage of this method is that, because it relies only on single-frame image processing to adopt d...



Examples


Embodiment Construction

[0028] The present invention will be further described below in conjunction with the accompanying drawings.

[0029] Referring to attached figure 1, the concrete steps of the present invention are as follows:

[0030] Step 1, input two videos that have been strictly registered in space and time.

[0031] Step 2, get the subband coefficients:

[0032] A three-dimensional uniform discrete curvelet transform (3D-UDCT) decomposition is performed on each of the two videos to obtain its band-pass directional subband coefficients and low-pass subband coefficients.
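No common library ships a 3D-UDCT, so the decomposition step can only be sketched. Below is a minimal NumPy stand-in that splits a video into one low-pass and one band-pass component with a 3D FFT mask; a real 3D-UDCT would instead produce many directional band-pass subbands. The function name and cutoff value are illustrative assumptions, not the patent's method.

```python
import numpy as np

def split_subbands(video, cutoff=0.25):
    """Crude frequency-domain stand-in for a 3D-UDCT decomposition:
    split a (T, H, W) video into a low-pass and a band-pass component.
    (Illustrative only; a real UDCT yields many directional subbands.)"""
    spec = np.fft.fftn(video)
    # normalized frequency grid along t, y, x, each axis in [-0.5, 0.5)
    freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in video.shape],
                        indexing="ij")
    radius = np.sqrt(sum(f ** 2 for f in freqs))
    low = np.fft.ifftn(spec * (radius <= cutoff)).real  # low-pass part
    band = video - low                                  # band-pass residual
    return low, band

video = np.random.rand(8, 16, 16)
low, band = split_subbands(video)
```

By construction the two components sum back to the input video, mirroring the perfect-reconstruction property a true curvelet decomposition would provide.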

[0033] Step 3, divide the video area into three areas:

[0034] Use the three-dimensional spatio-temporal structure tensor to detect the spatio-temporal saliency of the band-pass directional subband coefficients of each video, and divide the video area into three regions: the moving-target region, the spatial geometric feature region, and the smooth region. The implementation steps are as follows:
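The three-way partition above can be sketched as follows. This is a hedged sketch, not the patent's exact procedure: it builds a per-voxel 3x3 spatio-temporal structure tensor from subband gradients, then labels voxels by their temporal and total gradient energy. The thresholds `t_motion` and `t_edge` are illustrative values I have assumed, and the local smoothing of the tensor that a faithful implementation would apply is omitted for brevity.

```python
import numpy as np

def classify_regions(coeffs, t_motion=1e-2, t_edge=1e-3):
    """Label each voxel of a (T, H, W) subband as smooth (0), spatial
    geometric feature (1), or moving target (2), using a per-voxel 3x3
    spatio-temporal structure tensor. Thresholds are illustrative."""
    gt, gy, gx = np.gradient(coeffs.astype(float))   # axes: t, y, x
    g = np.stack([gt, gy, gx], axis=-1)              # (..., 3) gradient vectors
    J = g[..., :, None] * g[..., None, :]            # (..., 3, 3) structure tensor
    # a faithful implementation would smooth J over a local window here
    energy = np.trace(J, axis1=-2, axis2=-1)         # total gradient energy
    temporal = J[..., 0, 0]                          # squared temporal gradient
    labels = np.zeros(coeffs.shape, dtype=np.uint8)  # 0 = smooth region
    labels[energy > t_edge] = 1                      # spatial feature region
    labels[temporal > t_motion] = 2                  # moving-target region
    return labels

# a static vertical edge: spatial features present, nothing moving
static = np.zeros((8, 16, 16))
static[:, :, 8:] = 1.0
labels = classify_regions(static)

# an edge that shifts one column per frame: moving-target voxels appear
moving = np.zeros((8, 16, 16))
for t in range(8):
    moving[t, :, t:t + 2] = 1.0
```

A static edge yields only smooth and spatial-feature labels; once the edge moves between frames, the temporal tensor component lights up and the moving-target label appears.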

[0035] Construc...



Abstract

The invention discloses a multi-sensor video fusion method based on spatio-temporal saliency detection. The method comprises the steps of: inputting two registered videos; decomposing each with the three-dimensional uniform discrete curvelet transform (3D-UDCT) to obtain subband coefficients; dividing the video area into three different regions; combining the different regions according to different fusion strategies to obtain the band-pass directional subband coefficients of the fused video; taking a weighted average of the low-pass subband coefficients to obtain the low-pass subband coefficients of the fused video; and performing the inverse 3D-UDCT transform to obtain the fused video. The method overcomes the prior art's limitations in spatial information extraction and spatio-temporal consistency, better extracts salient spatio-temporal feature information from the input video images so that the fused video has better spatio-temporal consistency and stability, is robust to noise, and can be used for video image fusion under a static background.
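The coefficient-combination steps in the abstract can be sketched in NumPy. The equal weights for the low-pass average and the single maximum-magnitude rule for band-pass coefficients are assumptions for illustration; the patent actually applies different fusion strategies per region (moving target, spatial feature, smooth), which are not fully disclosed in this excerpt.

```python
import numpy as np

def fuse_lowpass(low_a, low_b, w=0.5):
    """Weighted average of the two videos' low-pass subband coefficients
    (equal weights assumed; the excerpt does not specify them)."""
    return w * low_a + (1.0 - w) * low_b

def fuse_bandpass(band_a, band_b):
    """Stand-in rule for band-pass coefficients: keep the coefficient with
    the larger magnitude. Illustrative; the patent's rules vary by region."""
    return np.where(np.abs(band_a) >= np.abs(band_b), band_a, band_b)

fused_low = fuse_lowpass(np.full((2, 2), 4.0), np.full((2, 2), 2.0))
fused_band = fuse_bandpass(np.array([1.0, -5.0]), np.array([-3.0, 2.0]))
```

The fused subbands would then be passed through the inverse 3D-UDCT to reconstruct the fused video.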

Description

Technical Field

[0001] The invention belongs to the technical field of image processing, and further relates to a multi-sensor video fusion method based on spatio-temporal saliency detection in the technical field of video image processing. The invention can more accurately extract salient spatio-temporal feature information from the input videos, and can be applied to multi-sensor video image fusion under static backgrounds.

Background Art

[0002] Image and video fusion is a special field of information and data fusion. Through image or video fusion, the "redundant" and "complementary" information between the original images or videos is extracted to obtain a fused image or video. The fused image or video can describe the scene more accurately than any single input image or video. The basic requirement of static image fusion is that the useful spatial information in the input image should be retained in the fused image as much as possible, and no false information shou...

Claims


Application Information

IPC(8): H04N5/262, H04N5/265
Inventors: 张强 (Zhang Qiang), 陈月玲 (Chen Yueling), 陈闵利 (Chen Minli), 王龙 (Wang Long)
Owner: XIDIAN UNIV