
Movement target detection and extraction method based on movement attention fusion model

A technology that combines a motion attention fusion model with moving object detection, applied in image data processing, instruments, computing and the like, to achieve the effects of reducing interference and suppressing noise

Inactive Publication Date: 2014-01-15
XIAN UNIV OF TECH

AI Technical Summary

Problems solved by technology

Junwei Han et al. use an attention model to segment video objects: global motion compensation is performed first, and dynamic attention and static attention are then fused to obtain the final result. However, this method is limited to local motion scenes.
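For orientation, here is a minimal sketch of the kind of dynamic/static attention fusion described above, assuming both attention maps are already normalized to [0, 1]; the weighted combination and the weight alpha are illustrative assumptions, not the exact fusion rule of that work.

```python
# Illustrative fusion of a dynamic (motion) and a static (image) attention map.
# Assumes both maps are float arrays normalized to [0, 1]; alpha is a made-up weight.
import numpy as np

def fuse_attention(dynamic_map: np.ndarray, static_map: np.ndarray,
                   alpha: float = 0.6) -> np.ndarray:
    """Weighted fusion of a dynamic and a static attention map."""
    fused = alpha * dynamic_map + (1.0 - alpha) * static_map
    return np.clip(fused, 0.0, 1.0)
```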

Method used


Embodiment Construction

[0047] The present invention will be described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0048] The invention provides a moving target detection and extraction method based on a motion attention fusion model. According to the motion contrast of the target in time and space, a motion attention fusion model is constructed from the variation characteristics of the motion vectors in time and space; combined with noise removal, median filtering and edge detection, this achieves accurate extraction of moving objects in global motion scenes.
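As a rough illustration of the processing chain named in this paragraph, the sketch below thresholds a precomputed motion attention map, removes noise, median-filters the result, and runs edge detection. It assumes OpenCV and a float attention map in [0, 1]; all thresholds and kernel sizes are illustrative values, not parameters taken from the patent.

```python
# Hypothetical post-processing chain: attention map -> threshold -> noise removal
# -> median filter -> edge detection. Parameters are illustrative.
import cv2
import numpy as np

def extract_moving_object(attention_map: np.ndarray) -> np.ndarray:
    """attention_map: float array in [0, 1]; returns an edge map of the object region."""
    # Binarize the attention map (Otsu picks the threshold automatically).
    att_u8 = (attention_map * 255).astype(np.uint8)
    _, mask = cv2.threshold(att_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Noise removal: morphological opening drops isolated responses.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Median filtering smooths the region boundary.
    mask = cv2.medianBlur(mask, 5)
    # Edge detection on the cleaned mask delineates the moving object.
    edges = cv2.Canny(mask, 50, 150)
    return edges
```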

[0049] The method proceeds as follows:

[0050] Step 1. After estimating the motion vector field according to the optical flow equation, perform two preprocessing steps: superposition and filtering:

[0051] Let the intensity of the image at pixel r = (x, y)^T at time t be denoted I(r, t). Via the optical flow equation v·∇I + ∂I/∂t = 0 ...
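A minimal sketch of Step 1 under stated assumptions: Farneback dense optical flow stands in here for the patent's optical-flow-equation solver, the flow magnitudes of consecutive frame pairs are superposed, and the accumulated field is median-filtered. The window handling, normalization and kernel size are illustrative choices, not values from the patent.

```python
# Illustrative Step 1: estimate a dense motion vector field per frame pair,
# superpose the magnitudes over the sequence, then filter the accumulated field.
import cv2
import numpy as np

def accumulated_motion_field(gray_frames: list) -> np.ndarray:
    """gray_frames: consecutive 8-bit grayscale frames; returns a filtered magnitude field."""
    acc = np.zeros(gray_frames[0].shape, dtype=np.float32)
    for prev, curr in zip(gray_frames[:-1], gray_frames[1:]):
        # Dense optical flow (Farneback) as a stand-in motion vector field estimator.
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        acc += np.linalg.norm(flow, axis=2)   # superposition of flow magnitudes
    acc_u8 = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.medianBlur(acc_u8, 5)          # filtering step
```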


Abstract

The invention relates to a moving target detection and extraction method based on a motion attention fusion model. The method is characterized in that the motion attention fusion model is constructed from the variation characteristics of motion vectors in time and space, according to the motion contrast of the target in time and space, and accurate extraction of the moving target in a global motion scene is achieved by combining noise removal, median filtering and edge detection. To remove the present limitations of target detection and extraction in global motion scenes, the method adopts a spatiotemporal motion attention fusion model together with noise removal, image filtering, and edge tracking and detection techniques. Test results on multiple global-motion video scenes show that, compared with other algorithms, the algorithm offers higher accuracy, lower complexity, a smaller amount of computation, and better real-time performance.
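As a hedged illustration of the "edge tracking and detection" step mentioned in the abstract, the sketch below traces object boundaries from a binary mask assumed to have been obtained from the fused attention map; the area threshold is a hypothetical way to keep only the target region.

```python
# Illustrative boundary tracing on a binary detection mask (assumed input).
import cv2
import numpy as np

def trace_target_boundary(mask: np.ndarray, min_area: float = 100.0) -> np.ndarray:
    """mask: uint8 binary image; returns an image with the retained boundaries drawn."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only contours large enough to plausibly be the moving target.
    keep = [c for c in contours if cv2.contourArea(c) >= min_area]
    out = np.zeros_like(mask)
    cv2.drawContours(out, keep, -1, 255, thickness=1)
    return out
```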

Description

Technical field

[0001] The invention belongs to the technical field of video image detection, and relates to a moving target detection and extraction method based on a motion attention fusion model.

Background technique

[0002] The detection and extraction of moving objects has long been a research hotspot in the field of video analysis and has a wide range of applications. Moving object detection and extraction can be roughly divided into two categories: one where the camera is still, i.e. a local motion scene, and one where the camera is moving, i.e. a global motion scene. Moving object detection in local motion scenes is relatively mature, but in global motion scenes the complexity of the motion information makes the detection and extraction of moving objects a long-standing difficult problem.

[0003] Video moving target detection algorithms are mainly based on spatiotemporal information, namely texture information, ...

Claims


Application Information

IPC(8): G06T7/20
Inventor: 刘龙 (Liu Long), 樊波阳 (Fan Boyang)
Owner: XIAN UNIV OF TECH