
Saliency map fusion method and system

A saliency map fusion method and system, applied in the field of saliency detection, which can solve problems such as unreasonable equal weighting of detection methods and unsatisfactory recall.

Pending Publication Date: 2020-03-06
BEIJING UNION UNIVERSITY

AI Technical Summary

Problems solved by technology

There are some traditional saliency map fusion methods, most of which simply sum and average, or multiply and average, multiple saliency maps. The weights of the saliency detection methods are all set to the same value, which is unreasonable in practice: for a given picture, and even for each pixel, the detection effects of the various saliency detection methods differ, so the weight of each saliency detection method should likewise be set differently.
At present, there are also some research methods for fusing multiple saliency maps. For example, Mai et al. used a conditional random field (CRF) to fuse multiple saliency maps and achieved good results, but the recall rate is not satisfactory.
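For contrast, here is a minimal sketch of the equal-weight averaging that the traditional fusion methods above perform; the function name is illustrative, not from the invention:

```python
import numpy as np

def naive_fusion(saliency_maps):
    """Equal-weight fusion: every detection method gets weight 1/M,
    regardless of how well it performs on this particular image."""
    return np.mean(np.stack(saliency_maps), axis=0)
```

This uniform weighting is precisely what the invention replaces with per-image, error-driven weights.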



Examples


Embodiment 1

[0060] As shown in figure 1, step 100 is executed to prepare the labeled data set. As shown in Figure 1A, within step 100, step 101 is executed: for the j-th image in the image set D, apply the M detection methods to extract saliency maps of the image. The extraction results of the various methods are S^j = {S_1^j, S_2^j, ..., S_M^j}, where S_i^j denotes the saliency map extracted by the i-th method from the j-th image, 1 ≤ i ≤ M. Step 102 is executed: calculate the absolute errors between the M saliency maps of the j-th image and the reference binary annotation g_j of the j-th image. Each saliency map is compared with the reference binary annotation g_j, and the absolute error e_i^j of the saliency map S_i^j relative to g_j is calculated, giving the absolute error vector E_j = (e_1^j, e_2^j, ..., e_M^j). Step 103 is executed to determine whether the absolute error calculation has been completed for every image in the data set D. If it has not, step 101 is re-executed for the j-th...
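A minimal sketch of steps 101-102, assuming the saliency maps and the reference annotation g_j are grayscale arrays normalized to [0, 1]; the helper names are illustrative, not from the patent:

```python
import numpy as np

def mean_absolute_error(s_ij, g_j):
    """Absolute error e_i^j of saliency map S_i^j relative to the
    reference binary annotation g_j (mean over pixels)."""
    return float(np.mean(np.abs(s_ij - g_j)))

def error_vector(saliency_maps, g_j):
    """Absolute error vector E_j = (e_1^j, ..., e_M^j) over the M methods."""
    return np.array([mean_absolute_error(s, g_j) for s in saliency_maps])
```

Looping step 103 over all images in D stacks these vectors into the absolute error matrix E that is reused at test time.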

Embodiment 2

[0068] As shown in figure 2, a saliency map fusion system includes a labeled data set 200 and a testing module 201.

[0069] Preparation of the labeled data set 200 includes setting the image set D and the corresponding reference binary annotation set G; there are M detection methods. It comprises the following sub-steps. Step 01: for an image I in the image set D, apply the M detection methods to extract saliency maps of the image; the extraction results of the various methods are S = {S_1, S_2, S_3, ..., S_i, ..., S_M}, where S_i denotes the saliency map extracted by the i-th method, 1 ≤ i ≤ M. Step 02: let the reference binary annotation corresponding to the j-th image in the image set D be g_j; compare each saliency map with g_j, and calculate the absolute error e_i^j of the saliency map S_i relative to the reference binary annotation g_j, obtaining the absolute error vector E_j. Step 03: perform the operations of Step 01 and Step 02 on each ...
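A rough structural sketch of the two modules, assuming each detection method is a callable mapping an image to a saliency map; the class names and interfaces are hypothetical:

```python
import numpy as np

class LabeledDataset:
    """Module 200: prepares image set D, annotation set G, and the
    N x M absolute error matrix E (one row per image, one column per method)."""
    def __init__(self, images, annotations, methods):
        self.methods = methods  # M detection methods: image -> saliency map
        self.E = np.array([
            [np.mean(np.abs(m(img) - g)) for m in methods]
            for img, g in zip(images, annotations)
        ])

class TestingModule:
    """Module 201: consumes the prepared dataset to fuse saliency maps
    for a test image (see the fusion sketch under the Abstract)."""
    def __init__(self, dataset):
        self.dataset = dataset
```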

Embodiment 3

[0073] This embodiment discloses a saliency map fusion method.

[0074] 1. Preparation of labeled data set

[0075] There is an image set D and a corresponding reference binary annotation set G; there are M detection methods.

[0076] Step 1: For an image I in D, apply the M detection methods to extract saliency maps of this image. The extraction results of the various methods are S = {S_1, S_2, ..., S_M}, where S_i denotes the saliency map extracted by the i-th method.

[0077] Step 2: Let the reference binary annotation corresponding to the j-th image in the image set D be g_j. Compare each saliency map with g_j, and calculate the absolute error e_i^j of the saliency map S_i relative to g_j, obtaining the absolute error vector E_j.

[0078] Step 3: Perform the operation of step 2 on each image in the image set D, and store the absolute error matrix E of the image set D. This work yields prior knowledge about the extraction quality of each detection method on each image.

...



Abstract

The invention provides a saliency map fusion method and system. The saliency map fusion method comprises preparation of an annotation data set and further comprises the following steps: performing an appearance-based nearest neighbor search for a test image I in an image set D; extracting the errors corresponding to the nearest neighbor images from the absolute error matrix E of the image set D to form an error matrix P; calculating the average absolute error of each detection method over the K nearest neighbor set, K being the number of neighbor images; calculating the weight of each detection method from its average absolute error; and applying the M detection methods to extract and fuse salient regions of the test image I. With the saliency map fusion method and system provided by the invention, image data does not need to be trained in advance, the problem of parameter adjustment across different training sets is avoided, the processing method is simple, and the accuracy and efficiency of salient region detection are effectively improved.
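A hedged end-to-end sketch of the pipeline described in the abstract. The appearance feature, the Euclidean neighbor metric, and the inverse-error weighting rule are assumptions; the abstract only states that weights are derived from the average absolute errors over the K nearest neighbors:

```python
import numpy as np

def fuse_saliency_maps(test_feature, dataset_features, E, test_maps, K=10, eps=1e-6):
    """dataset_features: N x d appearance features of image set D;
    E: N x M absolute error matrix; test_maps: M saliency maps of test image I."""
    # 1. Appearance-based K nearest neighbor search in the image set D.
    dists = np.linalg.norm(dataset_features - test_feature, axis=1)
    knn = np.argsort(dists)[:K]
    # 2. Extract the corresponding rows of E to form the error matrix P (K x M).
    P = E[knn]
    # 3. Average absolute error of each detection method over the K neighbors.
    mean_err = P.mean(axis=0)
    # 4. Weight each method inversely to its average error (assumed rule),
    #    then normalize the weights to sum to 1.
    w = 1.0 / (mean_err + eps)
    w /= w.sum()
    # 5. Weighted fusion of the M saliency maps extracted from the test image.
    return sum(wi * s for wi, s in zip(w, test_maps))
```

Because the weights come from precomputed errors of annotated neighbor images, no training phase is needed, which matches the abstract's claim of avoiding training-set-dependent parameter tuning.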

Description

technical field

[0001] The invention relates to the technical fields of computer vision and image processing, and in particular to a saliency map fusion method and system.

Background technique

[0002] Image saliency detection aims to find the most important part of an image. It is an important preprocessing step for reducing computational complexity in the field of computer vision, and it has a wide range of applications in image compression, target recognition, image segmentation, and other fields. At the same time, it is a challenging problem in computer vision. Many saliency detection methods exist, and each has its own advantages and disadvantages; even for the same saliency detection method, the detection effect varies greatly across different pictures. Therefore, it is particularly important to obtain better saliency maps by combining the results of multiple saliency detection methods. There are some traditional saliency map fusion methods, most of which are simple summing and averaging or simple multiplication and averaging of multiple saliency maps ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/32, G06K9/62
CPC: G06V10/25, G06F18/251
Inventor: 梁晔 (Liang Ye), 马楠 (Ma Nan), 李文法 (Li Wenfa)
Owner: BEIJING UNION UNIVERSITY