
Region-based multi-scale spatial-temporal visual saliency detection method

A detection method using multi-scale technology, applied in image data processing, instrumentation, and computation, addressing the problem of distinguishing the target from the background.

Inactive Publication Date: 2018-03-02
SHENZHEN WEITESHI TECH
3 Cites, 14 Cited by

AI Technical Summary

Problems solved by technology

[0004] To address the difficulty of distinguishing a target from its background, the purpose of the present invention is to provide a region-based multi-scale spatio-temporal visual saliency detection method. The method first executes a temporal superpixel model to divide the video into spatio-temporal regions at various scale levels; it then extracts motion information at each scale level and the features of each frame, constructs feature maps, and combines the feature maps to generate a spatial saliency entity for each scale-level region. Next, an adaptive time window is used to smooth the saliency values of each region individually, incorporating temporal consistency to form spatio-temporal saliency entities across frames. Finally, a spatio-temporal saliency map is generated for each frame by fusing the multi-scale spatio-temporal saliency entities.
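The final fusion step can be illustrated with a short sketch. This excerpt does not specify the fusion operator, so the example below simply normalizes each scale's spatio-temporal saliency map and averages them per frame; the function names and the mean-fusion rule are assumptions for illustration, not the patented formulation.

```python
import numpy as np

def normalize(sal_map):
    """Rescale a saliency map to [0, 1]; a flat map stays at zero."""
    lo, hi = float(sal_map.min()), float(sal_map.max())
    return (sal_map - lo) / (hi - lo) if hi > lo else np.zeros_like(sal_map)

def fuse_scales(per_scale_maps):
    """Fuse one frame's per-scale spatio-temporal saliency maps.

    per_scale_maps: list of HxW float arrays, one per scale level.
    Mean fusion is an assumption; the excerpt only states that the
    multi-scale spatio-temporal saliency entities are fused per frame.
    """
    stacked = np.stack([normalize(m) for m in per_scale_maps], axis=0)
    return normalize(stacked.mean(axis=0))

# toy usage: three scale levels of a 4x4 frame
frame_saliency = fuse_scales([np.random.rand(4, 4) for _ in range(3)])
```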




Embodiment Construction

[0052] It should be noted that, where there is no conflict, the embodiments of the present application and the features in those embodiments may be combined with one another. The present invention is described in further detail below in conjunction with the drawings and specific embodiments.

[0053] Figure 1 is a system framework diagram of the region-based multi-scale spatio-temporal visual saliency detection method of the present invention. The framework mainly comprises multi-scale video segmentation and spatial saliency entity construction.

[0054] The multi-scale spatio-temporal visual saliency detection method detects salient regions in videos by combining static and dynamic features, where the features are extracted from regions. The method first applies a temporal superpixel model to segment the video into spatio-temporal regions at various scale levels; it then extracts motion information at each scale level together with the features of each frame and constructs feature maps from these features,...
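As a rough illustration of the segmentation and spatial-saliency steps, the sketch below substitutes per-frame SLIC superpixels (from scikit-image) for the patent's temporal superpixel model, and uses global colour contrast between region means in Lab space as a stand-in for the combined feature maps. Both substitutions, and the function names, are assumptions for illustration only.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def regional_spatial_saliency(frame_rgb, n_segments):
    """Per-region spatial saliency at one scale level (simplified sketch).

    Plain per-frame SLIC stands in for the patent's temporal superpixel
    model; global colour contrast between region means in Lab space
    stands in for the combined feature maps.
    """
    labels = slic(frame_rgb, n_segments=n_segments, compactness=10, start_label=0)
    lab = rgb2lab(frame_rgb)
    ids = np.unique(labels)
    means = np.array([lab[labels == i].mean(axis=0) for i in ids])
    # saliency of a region = average colour distance to every other region
    dists = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=-1)
    region_sal = dists.mean(axis=1)
    region_sal = (region_sal - region_sal.min()) / (np.ptp(region_sal) + 1e-8)
    # paint each region's saliency back onto its pixels
    return region_sal[np.searchsorted(ids, labels)]

# multi-scale: coarser to finer segmentations of the same (random) frame
frame = np.random.rand(120, 160, 3)
scale_maps = [regional_spatial_saliency(frame, n) for n in (50, 150, 400)]
```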



Abstract

The invention provides a region-based multi-scale spatial-temporal visual saliency detection method comprising multi-scale video segmentation and spatial saliency entity construction, the process of which comprises: first executing a temporal superpixel model to divide a video into spatio-temporal regions at various scale levels; extracting the motion information at each scale level and the characteristics of each frame, creating feature maps, and combining the feature maps to generate a spatial saliency entity for each scale-level region; then individually smoothing the saliency values of each region with an adaptive time window and incorporating temporal consistency to form spatio-temporal saliency entities across frames; and finally generating a spatio-temporal saliency map for each frame by fusing the multi-scale spatio-temporal saliency entities. The method overcomes the limitation of using a fixed number of reference frames by introducing a new adaptive-time-window metric, maintains temporal consistency between consecutive frames for each entity in the video, and reduces the fluctuation of the target between frames.
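The adaptive time window is the abstract's key departure from a fixed number of reference frames, but this excerpt does not define the window metric. The sketch below assumes one plausible choice, shrinking the window when a region's motion magnitude is high so that fast-moving regions receive less temporal averaging; the criterion, window bounds, and function name are all hypothetical.

```python
import numpy as np

def adaptive_window_smoothing(saliency, motion, w_min=1, w_max=7):
    """Smooth one region's per-frame saliency with an adaptive time window.

    saliency, motion: 1-D arrays indexed by frame for a single region.
    Assumed criterion: the half-window shrinks from w_max toward w_min
    as the region's (normalized) motion magnitude grows.
    """
    motion_n = (motion - motion.min()) / (np.ptp(motion) + 1e-8)
    smoothed = np.empty_like(saliency, dtype=float)
    for t in range(len(saliency)):
        half = int(round(w_max - (w_max - w_min) * motion_n[t]))
        lo, hi = max(0, t - half), min(len(saliency), t + half + 1)
        smoothed[t] = saliency[lo:hi].mean()
    return smoothed

# toy usage: 10 frames of saliency and motion magnitude for one region
sal = np.array([0.2, 0.8, 0.3, 0.9, 0.4, 0.85, 0.35, 0.9, 0.3, 0.8])
mot = np.array([0.1, 0.1, 0.2, 0.9, 0.9, 0.8, 0.2, 0.1, 0.1, 0.1])
print(adaptive_window_smoothing(sal, mot))
```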

Description

Technical Field

[0001] The invention relates to the field of visual saliency detection, and in particular to a region-based multi-scale spatio-temporal visual saliency detection method.

Background Technique

[0002] Today, with the rapid development of Internet communication technology and multimedia processing technology, digital images and videos have gradually become the main carriers of information. Faced with massive volumes of images and videos, image processing technology that keeps pace with this data growth has become an urgent need. In the image preprocessing stage, visual saliency detection can reduce or even eliminate human participation, allowing tasks such as image segmentation, scene classification, and object recognition to be performed automatically or adaptively. The visual saliency mechanism underlies the high efficiency with which human eyes process visual information. At present, research on visual saliency detection has become a popular direction in the field of machin...


Application Information

IPC(8): G06T 7/215; G06T 7/246
CPC: G06T 7/215; G06T 7/246
Inventor: 夏春秋
Owner: SHENZHEN WEITESHI TECH