Image Visual Salient Region Detection Method Based on Deep Autoencoder Reconstruction

An autoencoder and image-vision technology, applied in the field of image processing, that addresses the problems of reduced salient-region detection accuracy and difficulty in highlighting salient regions, and achieves good universality, good scalability, and efficient detection results.

Active Publication Date: 2018-04-17
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

Because this method computes the reconstruction relationship of each region independently, it does not consider, from a global perspective, the competition between different regions. As a result, when the local and non-local center-periphery relations of the actual salient region and the background region are similar, the actual salient region is difficult to highlight, which ultimately reduces the accuracy of salient region detection in the image.

Embodiment Construction

[0021] Referring to figure 1, the specific implementation steps of the present invention are as follows:

[0022] Step 1: Build a center-periphery reconstruction network

[0023] Referring to figure 2, the deep reconstruction network established by the present invention mainly consists of three parts: an encoding module, a decoding module, and an inference layer. The encoding module is composed of L layers of neurons; the size of its first layer, N0, is determined by the dimension of the peripheral block s(x), and in the example scheme the numbers of neurons in the other layers are 256, 128, 64, 32, and 8. The structure of the decoding module is symmetric to that of the encoding module. The inference layer sits above the decoding module; the number of neurons it contains, Nout, is determined by the dimension of the center vector c(x) of the sampling point x, and in the example scheme Nout is 147. The encoding module and the decoding module together constitute an autoencoder network, an...
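For concreteness, the architecture described in [0023] can be sketched in a few lines. The following is a minimal, hypothetical PyTorch rendering, not the patent's implementation: the hidden widths (256, 128, 64, 32, 8), the symmetric decoder, and the Nout = 147 inference layer follow the text, while the peripheral-block dimension N0 = 588 (assuming a 14×14 RGB block), the sigmoid activations, and all identifiers are assumptions made for illustration.

```python
import torch
import torch.nn as nn

N0 = 588      # assumed: dimension of the peripheral block s(x) (14x14 RGB)
N_OUT = 147   # dimension of the center vector c(x) in the example scheme
WIDTHS = [256, 128, 64, 32, 8]

def mlp(sizes):
    """Fully connected stack with sigmoid activations between the given sizes."""
    layers = []
    for i in range(len(sizes) - 1):
        layers += [nn.Linear(sizes[i], sizes[i + 1]), nn.Sigmoid()]
    return nn.Sequential(*layers)

class CenterPeripheryNet(nn.Module):
    """Encoding module plus symmetric decoding module (the autoencoder),
    with an inference layer on top of the decoder, as described in [0023]."""
    def __init__(self):
        super().__init__()
        self.encoder = mlp([N0] + WIDTHS)        # N0 -> 256 -> ... -> 8
        self.decoder = mlp(WIDTHS[::-1] + [N0])  # 8 -> ... -> 256 -> N0
        self.inference = nn.Linear(N0, N_OUT)    # reconstructed s(x) -> c(x)

    def forward(self, s):
        h = self.encoder(s)            # compress the peripheral block s(x)
        s_hat = self.decoder(h)        # autoencoder reconstruction of s(x)
        c_hat = self.inference(s_hat)  # predict the center vector c(x)
        return s_hat, c_hat

net = CenterPeripheryNet()
s_hat, c_hat = net(torch.rand(4, N0))  # a batch of 4 peripheral blocks
```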

Abstract

The present invention discloses an image visual salient region detection method based on deep autoencoder reconstruction, which mainly solves the problems that existing image saliency detection methods lack global information integration and rely on labeled data. The technical scheme is as follows: first, sample the image's global information to obtain a training sample set consisting of multiple center-periphery image region pairs; second, use this set to train an autoencoder-based deep reconstruction network from the peripheral region to the central region; third, use the learned network to compute, for each pixel of an image, the error of reconstructing the central region from the peripheral region; finally, estimate a saliency value for each pixel in combination with a center prior. The method obtains saliency detection results consistent with the regions of interest of the human visual system and can be used in image compression and in image target detection and recognition.
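As a rough illustration of how these four steps compose, here is a minimal NumPy sketch, again hypothetical rather than the patent's implementation. `net` stands for the trained peripheral-to-center network (step 2, the training itself, is elided; the pairs from step 1 would feed it), and the 7×7 RGB center patch (7×7×3 = 147 values, which would match the example scheme's Nout), the 14×14 surround, and the Gaussian center prior are illustrative assumptions.

```python
import numpy as np

def crop(image, y, x, size):
    """Square patch of side `size` centered (up to rounding) at (y, x)."""
    half = size // 2
    return image[y - half:y - half + size, x - half:x - half + size]

def sample_pairs(image, n=1000, c=7, p=14, seed=0):
    """Step 1: sample (peripheral, center) training pairs over the image."""
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    ys = rng.integers(p, H - p, size=n)
    xs = rng.integers(p, W - p, size=n)
    return [(crop(image, y, x, p).ravel(), crop(image, y, x, c).ravel())
            for y, x in zip(ys, xs)]

def reconstruction_error_map(net, image, c=7, p=14):
    """Step 3: per-pixel squared error between the actual center vector and
    the one the trained network reconstructs from the surround."""
    H, W = image.shape[:2]
    err = np.zeros((H, W))
    for y in range(p, H - p):
        for x in range(p, W - p):
            center = crop(image, y, x, c).ravel()
            c_hat = net(crop(image, y, x, p).ravel())  # surround -> center
            err[y, x] = np.sum((center - c_hat) ** 2)
    return err

def saliency_map(err, sigma_frac=0.3):
    """Step 4: modulate the error map with a Gaussian center prior and
    normalize the result to [0, 1]."""
    H, W = err.shape
    yy, xx = np.mgrid[0:H, 0:W]
    d2 = (yy - H / 2) ** 2 + (xx - W / 2) ** 2
    prior = np.exp(-d2 / (2 * (sigma_frac * min(H, W)) ** 2))
    s = err * prior
    return (s - s.min()) / (s.max() - s.min() + 1e-12)
```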

Description

Technical field

[0001] The invention belongs to the field of image processing and relates to an image visual salient region detection method, which can be used for image compression and for image target detection and recognition.

Background technique

[0002] With the development of network informatization, humanity has entered an era of large-scale "big data" growth. Images, as one of the important ways of obtaining information, are among the main components of this data. How to effectively select the most valuable information from images has gradually become a hot topic in the field of image processing.

[0003] The human visual system, even when facing complex visual environments, can accurately extract and analyze the main information of a scene. For image data, the human visual system usually allocates its limited resources and capabilities to areas containing the key image information, that is, the salient areas, while other, unattended areas are only process...

Application Information

Patent Timeline: no application
Patent Type & Authority: Patent (China)
IPC(8): G06T7/00
CPC: G06T2207/20004; G06T2207/20012
Inventors: 齐飞, 夏辰, 沈冲, 石光明, 黄原成, 李甫, 张犁
Owner: XIDIAN UNIV