
Image semantic retrieving method based on visual attention model

A visual attention and image technology, applied to image enhancement, image data processing, and special data processing applications; it addresses problems such as the difficulty of accurately extracting objects from images, reduced retrieval accuracy, and query methods that users are not accustomed to.

Inactive Publication Date: 2010-05-12
BEIJING JIAOTONG UNIV
Cites: 0 · Cited by: 59

AI Technical Summary

Problems solved by technology

The "many-to-many" matching strategy is used in the matching, that is, the integrated region matching (Integrated Region Matching, IRM) algorithm is used to measure the similarity of the image. It should be pointed out that since all image features and matching are used. The segmented area, but no selection of the area. Since most areas are not of interest to the user, the whole-area matching leads to a decrease in retrieval accuracy, which still belongs to the global retrieval mode.
[0008] Although region-based image retrieval is closer to the user's query intent than global image retrieval, it still has problems. First, image segmentation remains a very difficult topic in computer vision, and existing segmentation techniques cannot guarantee accurate extraction of the objects in an image, so the segmented regions do not necessarily correspond to semantic objects. Second, only the regions of interest express the user's query intent, while most of the remaining regions are irrelevant to it. A retrieval strategy based on whole-region matching therefore cannot reflect the user's retrieval purpose, and the irrelevant regions are often hard to match correctly, which reduces retrieval accuracy. On the other hand, having the user manually select the regions of interest increases the user's workload, and users are not accustomed to this query mode.
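For concreteness, the following is a minimal sketch of the kind of integrated region matching the passage above refers to: a greedy "most similar, highest priority" assignment of matching credit between all region pairs, weighted by region significance. The feature representation, distance, and weighting here are simplified placeholders, not the original IRM formulation.

```python
import numpy as np

def irm_distance(regions_a, weights_a, regions_b, weights_b):
    """Simplified "many-to-many" integrated region matching (IRM).

    regions_a / regions_b : (n, d) and (m, d) arrays of region feature vectors
    weights_a / weights_b : region significance weights (e.g. normalized areas)

    Every segmented region of one image may match every region of the
    other; matching credit is assigned greedily to the closest pairs.
    A lower return value means more similar images.
    """
    ra = np.asarray(regions_a, dtype=float)
    rb = np.asarray(regions_b, dtype=float)
    wa = np.asarray(weights_a, dtype=float).copy()
    wb = np.asarray(weights_b, dtype=float).copy()

    # pairwise feature distances between all region pairs
    dist = np.linalg.norm(ra[:, None, :] - rb[None, :, :], axis=-1)
    credit = np.zeros_like(dist)

    # "most similar, highest priority": give credit to the closest pairs first
    flat_order = np.argsort(dist, axis=None)
    for i, j in zip(*np.unravel_index(flat_order, dist.shape)):
        if wa[i] <= 0 or wb[j] <= 0:
            continue
        s = min(wa[i], wb[j])
        credit[i, j] = s
        wa[i] -= s
        wb[j] -= s

    # credit-weighted sum of region distances
    return float((credit * dist).sum())
```

Because every segmented region receives some matching credit regardless of whether the user cares about it, background regions dominate the score, which is exactly the drawback described above.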




Embodiment Construction

[0029] Specific embodiments of the present invention will be described in detail below in conjunction with the accompanying drawings.

[0030] The overall framework of the algorithm of the present invention is shown in Figure 1. The generation of a saliency map using a computational model of the visual attention mechanism is the basis of the entire retrieval framework; the specific methods for implementing each module of the framework are described below.
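The patent does not spell out its saliency computation at this point, so the snippet below is only a bottom-up, Itti-Koch-style sketch of what such a computational model of visual attention can look like: center-surround contrast of intensity and opponent-color channels at a few assumed scales.

```python
import cv2
import numpy as np

def saliency_map(bgr):
    """Bottom-up saliency sketch: center-surround contrast of intensity and
    opponent-color channels (scales and normalization are assumptions)."""
    img = bgr.astype(np.float32) / 255.0
    b, g, r = cv2.split(img)
    intensity = (b + g + r) / 3.0
    rg = r - g                      # red-green opponency
    by = b - (r + g) / 2.0          # blue-yellow opponency

    sal = np.zeros(intensity.shape, np.float32)
    for chan in (intensity, rg, by):
        for sigma_c, sigma_s in ((2, 8), (4, 16)):     # center / surround scales
            center = cv2.GaussianBlur(chan, (0, 0), sigma_c)
            surround = cv2.GaussianBlur(chan, (0, 0), sigma_s)
            sal += np.abs(center - surround)

    sal -= sal.min()
    if sal.max() > 0:
        sal /= sal.max()
    return sal                       # values in [0, 1], higher = more salient
```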

[0031] The Canny edge operator and the JSEG image segmentation algorithm are used to extract, respectively, the edge map and the segmentation map corresponding to the original input image. At the same time, the saliency map generation algorithm based on the visual attention mechanism of the present invention is used to extract the saliency map corresponding to the original image. Then the edge map and the segmentation map are fused with the extracted saliency map, that is, the results of the segmentation map and...
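The fusion of the edge map, the segmentation map, and the saliency map is only outlined above, so the snippet below is a hedged guess at one plausible realization: suppress edges that fall in non-salient areas and keep only segmented regions whose mean saliency exceeds a threshold. JSEG is not available in OpenCV, so a precomputed label map stands in for its output; the Canny thresholds and the 0.5 cut-off are assumptions.

```python
import cv2
import numpy as np

def fuse_with_saliency(bgr, seg_labels, sal, region_thresh=0.5):
    """Combine an edge map and segmentation labels with a saliency map.

    seg_labels : integer label map from a segmentation algorithm
                 (the patent uses JSEG; any label map works here)
    sal        : saliency map in [0, 1], same height/width as the image
    """
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                 # edge map (thresholds assumed)

    # keep only edges lying in sufficiently salient areas
    salient_edges = edges * (sal > region_thresh).astype(np.uint8)

    # keep segmented regions whose mean saliency exceeds the threshold
    salient_region_mask = np.zeros(sal.shape, dtype=np.uint8)
    for label in np.unique(seg_labels):
        region = seg_labels == label
        if sal[region].mean() > region_thresh:
            salient_region_mask[region] = 1

    return salient_edges, salient_region_mask
```

Retrieval features could then be drawn from the salient edges and salient regions only, in line with the salient-edge / salient-region idea stated in the abstract.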



Abstract

The invention provides an image semantic retrieval method based on a visual attention mechanism model. The method is driven entirely by data, so it can understand the semantics of images from the user's point of view as far as possible without increasing the user's interactive burden, and stays close to the user's perception so as to improve retrieval performance. Its advantages are: (1) the visual attention mechanism from visual cognition theory is introduced into image retrieval; (2) the method is a completely bottom-up retrieval mode, so it imposes none of the user burden brought by relevance feedback; and (3) salient edge information and salient region information in images are considered simultaneously, realizing an integrated retrieval mode and improving image retrieval performance.
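The abstract says that salient edge information and salient region information are considered simultaneously in an integrated retrieval mode, but the combination rule is not given here; the sketch below simply uses an assumed weighted sum, and `edge_similarity`, `region_similarity`, and `alpha` are hypothetical placeholders rather than the patent's formulation.

```python
from typing import Callable, Dict, List, Tuple

def rank_images(query: Dict, database: Dict[str, Dict],
                edge_similarity: Callable[[object, object], float],
                region_similarity: Callable[[object, object], float],
                alpha: float = 0.5) -> List[Tuple[str, float]]:
    """Rank database images by a weighted mix of salient-edge and
    salient-region similarity (alpha is an assumed fusion weight)."""
    ranked = []
    for name, feats in database.items():
        score = (alpha * edge_similarity(query["edges"], feats["edges"])
                 + (1.0 - alpha) * region_similarity(query["regions"], feats["regions"]))
        ranked.append((name, score))
    return sorted(ranked, key=lambda item: item[1], reverse=True)  # best match first
```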

Description

Technical field

[0001] The invention relates to image recognition and retrieval technology, in particular to an image semantic retrieval method.

Background technique

[0002] With the rapid development of multimedia and Internet technology, digital images have become a widely used medium. In recent years, the rapid spread of digital cameras and camera-equipped mobile devices has made digital images ever easier to obtain. The number of images people encounter and need to process every day has grown geometrically, and their range of application has greatly expanded. Faced with such large-scale image resources, how to organize them effectively and retrieve them quickly has become an urgent problem. Unlike text, which explains its own content, an image needs the help of human subjective understanding to interpret its meaning, so image retrieval is much more ...


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06F17/30G06T5/00G06T5/50
Inventor 冯松鹤郎丛妍须德
Owner BEIJING JIAOTONG UNIV