
Bottom-up visual saliency generating method fusing local-global contrast ratio

A technology combining global and local contrast, applied to computer components, image data processing, instruments, etc.; it addresses the problem that existing models consider only the global or only the local contrast of an image and therefore cannot highlight salient regions well.

Inactive Publication Date: 2014-08-20
NORTHWESTERN POLYTECHNICAL UNIV

AI Technical Summary

Problems solved by technology

[0003] Although the above saliency computation models achieve satisfactory results on specific sample libraries, they share an obvious defect: each considers only the global contrast or only the local contrast of the image, and does not apply both kinds of contrast information simultaneously within a unified saliency computation model. Experiments show that salient regions based on local feature contrast tend to concentrate on strongly varying edges or complex background regions, while salient regions based on global feature contrast fail to highlight regions that contrast strongly with their surroundings.
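The trade-off described above can be illustrated with a minimal sketch. This is not the patent's formulation: patch features are toy one-dimensional values and distances are plain Euclidean, chosen only to show that a local measure highlights edges while a global measure highlights minority regions.

```python
import numpy as np

def local_contrast(features, i, neighbors):
    """Mean feature distance between patch i and its spatial neighbours."""
    return np.mean([np.linalg.norm(features[i] - features[j]) for j in neighbors])

def global_contrast(features, i):
    """Mean feature distance between patch i and every other patch."""
    others = [j for j in range(len(features)) if j != i]
    return np.mean([np.linalg.norm(features[i] - features[j]) for j in others])

# Toy 1-D "image" of 5 patch features: a sharp edge between indices 1 and 2
features = np.array([[0.0], [0.0], [1.0], [1.0], [1.0]])
loc = [local_contrast(features, i, [j for j in (i - 1, i + 1) if 0 <= j < 5])
       for i in range(5)]
glb = [global_contrast(features, i) for i in range(5)]
# loc peaks at the edge patches (1 and 2); glb is highest for the minority
# region (patches 0 and 1), regardless of where the edge lies.
```

In this toy case the two cues disagree: patch 0 is flat locally yet globally rare, which is exactly the situation a fused model is meant to resolve.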



Examples


Embodiment Construction

[0023] The present invention is further described below in conjunction with embodiments and the accompanying drawings:

[0024] The hardware environment used for implementation is an Intel Pentium 2.93 GHz CPU with 2.0 GB of memory; the software environment is Matlab R2011b on Windows XP. All images in the BRUCE database are used as test data in the experiments. The database contains 120 natural images and is an internationally open database for testing visual saliency computation models.

[0025] The present invention is specifically implemented as follows:

[0026] 1. Extract patches and their features from the image: first downsample the image to N×N pixels, then slide a square window (side length ∈ [5,50]) over the downsampled image with a fixed step size to extract patches p_i. The vector formed by the pixel values within patch p_i serves as the patch's feature x_i, where i ∈ [1, M] and M is the number of patches in the image.
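Step 1 can be sketched roughly as follows. The concrete values (`n=64`, window side 8, step 4) and the nearest-neighbour downsampling are illustrative placeholders, not the patent's exact parameters:

```python
import numpy as np

def extract_patches(image, n=64, win=8, step=4):
    """Downsample a grayscale image to n×n, then slide a win×win window with
    the given step; each patch's pixel values, flattened, form its feature
    vector x_i.  Returns an (M, win*win) feature matrix."""
    # Nearest-neighbour downsampling (a stand-in for any resampling method)
    h, w = image.shape
    rows = np.arange(n) * h // n
    cols = np.arange(n) * w // n
    small = image[np.ix_(rows, cols)]
    feats = []
    for r in range(0, n - win + 1, step):
        for c in range(0, n - win + 1, step):
            feats.append(small[r:r + win, c:c + win].ravel())
    return np.stack(feats)

X = extract_patches(np.random.rand(240, 320))
# With these placeholder values: 15×15 = 225 patches, each a 64-dim vector
```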

[0027] 2. Con...



Abstract

The invention provides a bottom-up visual saliency generation method that fuses local and global contrast. Based on sparse coding theory, the method first computes, for each image block, the local contrast against the other blocks in its neighborhood and the global contrast against the remaining blocks in the image. The two kinds of contrast information are then organically combined, a center bias is added, and fusion of local and global contrast is finally achieved, yielding a visual saliency computation model with improved accuracy and robustness.
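A minimal sketch of the fusion idea in the abstract follows. The multiplicative combination, the min-max normalisation, and the Gaussian centre bias (including `sigma`) are hypothetical choices for illustration; the patent's sparse-coding contrast computation and exact fusion rule are not reproduced here.

```python
import numpy as np

def fuse_saliency(local_c, global_c, positions, size, sigma=0.25):
    """Combine per-patch local and global contrast scores and weight the
    result by a centre-bias Gaussian (hypothetical fusion rule)."""
    def norm(v):
        v = np.asarray(v, dtype=float)
        rng = v.max() - v.min()
        return (v - v.min()) / rng if rng > 0 else np.zeros_like(v)

    combined = norm(local_c) * norm(global_c)   # high only when both cues agree
    cy = cx = (size - 1) / 2.0                  # image centre in patch coords
    d2 = ((positions[:, 0] - cy) ** 2 +
          (positions[:, 1] - cx) ** 2) / (size * sigma) ** 2
    center_bias = np.exp(-d2 / 2.0)             # favour patches near the centre
    return norm(combined * center_bias)

positions = np.array([[0, 0], [32, 32], [63, 63]])
sal = fuse_saliency([0, 1, 1], [0, 1, 1], positions, size=64)
# The central patch dominates; the equally contrastive corner patch is damped.
```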

Description

Technical field

[0001] The invention belongs to the field of computer vision algorithm research and relates to a bottom-up visual saliency generation method that integrates local and global contrast; it can accurately and robustly compute a saliency map for a given image in a natural image database.

Background technique

[0002] Visual saliency is an important function of visual attention: the observer selects important content from a complex visual scene to focus on, while ignoring other, less important content. Some content in a visual scene grabs the observer's attention more than the rest, and we say it has higher visual saliency. The idea of visual saliency has been widely used in computational models of visual attention. The saliency measure adopted by Itti in his classic visual attention model is based on the difference of local visual features between pixels and their surrounding neighborhoods; Ma et al. In 2003, a sa...

Claims


Application Information

IPC(8): G06K9/46, G06T7/00
Inventor: 韩军伟 (Han Junwei), 张鼎文 (Zhang Dingwen), 郭雷 (Guo Lei)
Owner NORTHWESTERN POLYTECHNICAL UNIV