Asymmetric GM multi-modal fusion saliency detection method and system based on CWAM

A detection method and detection system, applied in the field of deep-learning-based visual saliency detection, which can solve problems such as low prediction accuracy, poor saliency prediction maps, and loss of image feature information, and achieves the effect of enhanced expression of salient regions.

Active Publication Date: 2020-10-13
HAINAN UNIVERSITY


Problems solved by technology

Most existing saliency detection methods adopt deep learning, combining convolution layers and pooling layers to extract image features. However, features obtained by simply stacking convolution and pooling operations are not sufficiently representative; in particular, the pooling operation discards feature information of the image, which leads to poor saliency prediction maps and low prediction accuracy.



Examples


Embodiment 1

[0037] Saliency detection simplifies an original image to its salient regions and marks them out, which provides accurate localization for subsequent processing such as image segmentation, recognition, and scaling, and it has broad application prospects in fields such as object capture. In recent years, with the rise of big data and deep learning technologies, convolutional neural networks (CNNs) have shown very superior performance in detecting salient objects in images; classification and regression with convolutional neural networks achieve better localization and capture of the boundary information of salient objects.

[0038] Referring to Fig. 1 to Fig. 5, a first embodiment of the present invention provides a CWAM-based asymmetric GM multimodal fusion saliency detection method, including:

[0039] S1: Collect image data and preprocess it to form a sample data set. What needs to be explained is:

[0040] ...
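As an illustration of step S1, the following is a minimal sketch of one way the sample data set could be assembled, assuming each RGB image is paired with a depth map and a real human-eye annotation map; the directory layout, target resolution, and use of PyTorch are assumptions for illustration and are not prescribed by the patent.

```python
# Minimal sketch of step S1 (sample data set construction).
# Paths, resolution, and library choices are assumptions, not taken from the patent.
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class RGBDSaliencyDataset(Dataset):
    def __init__(self, root, size=(224, 224)):
        self.rgb_dir = os.path.join(root, "rgb")      # RGB images
        self.depth_dir = os.path.join(root, "depth")  # depth maps (e.g. HHA-encoded)
        self.gt_dir = os.path.join(root, "gt")        # real human-eye annotation maps
        self.names = sorted(os.listdir(self.rgb_dir))
        self.to_tensor = transforms.Compose([
            transforms.Resize(size),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        rgb = self.to_tensor(Image.open(os.path.join(self.rgb_dir, name)).convert("RGB"))
        depth = self.to_tensor(Image.open(os.path.join(self.depth_dir, name)).convert("RGB"))
        gt = self.to_tensor(Image.open(os.path.join(self.gt_dir, name)).convert("L"))
        return rgb, depth, gt
```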

Embodiment 2

[0066] Referring to Fig. 7, a second embodiment of the present invention is shown. This embodiment differs from the first embodiment in that it provides a CWAM-based asymmetric GM multimodal fusion saliency detection system, including:

[0067] The acquisition module 100 is used to acquire the RGB image, the depth image, and the real human-eye annotation image of the original stereo image, and to construct a sample data set.

[0068] The data processing center module 200 is used to receive, calculate, store, and output the weight vectors and bias terms to be processed. It includes a computing unit 201, a database 202, and an input/output management unit 203. The computing unit 201 is connected to the acquisition module 100 and is used to receive the image data acquired by the acquisition module 100 and to perform preprocessing and weight calculation on it; the database 202 is connected to each module and is used to store all received data information, and p...
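The module decomposition in [0067] and [0068] can be read as a small object composition. The sketch below is only an illustrative arrangement of the acquisition module 100 and the data processing center module 200 with its computing unit 201 and database 202 as Python classes; the class and method names, optimizer, and loss are hypothetical and are not taken from the patent.

```python
# Illustrative sketch of the system modules; names and training details are hypothetical.
import torch
from torch.utils.data import DataLoader

class AcquisitionModule:                          # module 100: supplies RGB/depth/annotation triples
    def __init__(self, dataset, batch_size=4):
        self.loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)

    def batches(self):
        yield from self.loader

class ComputingUnit:                              # unit 201: preprocessing and weight calculation
    def __init__(self, model, lr=1e-4):
        self.model = model                        # model takes (rgb, depth) and outputs a saliency map
        self.opt = torch.optim.Adam(model.parameters(), lr=lr)

    def step(self, rgb, depth, gt):
        pred = self.model(rgb, depth)
        loss = torch.nn.functional.binary_cross_entropy(pred, gt)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return loss.item()

class DataProcessingCenter:                       # module 200
    def __init__(self, model):
        self.computing_unit = ComputingUnit(model)   # unit 201
        self.database = {}                           # unit 202: stores received data and learned weights
        # an input/output management unit (203) would route data between the modules
```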

Embodiment 3

[0075] In order to better understand the application of the method of the present invention, this embodiment describes the combined operation of the detection method and system. Referring to Fig. 6, the process is as follows:

[0076] (1) The convolutional neural network includes an input layer, a hidden layer, and an output layer.

[0077] The input end of the input layer receives the RGB image of the original stereo image and the corresponding depth map; the output end of the input layer outputs the R channel component, the G channel component, and the B channel component of the original input image, and the output of the input layer serves as the input of the hidden layer. After HHA encoding, the depth map has three channels like the RGB image, that is, it is likewise split into three components after passing through the input layer; the input original stereo image has a width of W and a height of H.
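As a concrete illustration of paragraph [0077], the sketch below splits an RGB tensor and an HHA-encoded (three-channel) depth tensor of size W×H into their per-channel components; the tensor names, sample values of W and H, and the use of PyTorch are illustrative assumptions.

```python
# Sketch of the input-layer behaviour described in [0077]; names and sizes are illustrative.
import torch

W, H = 224, 224                                   # width and height of the original stereo image
rgb = torch.rand(1, 3, H, W)                      # RGB image of the original stereo image
depth_hha = torch.rand(1, 3, H, W)                # depth map after HHA encoding (3 channels)

# The input layer forwards the per-channel components to the hidden layer.
r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]   # R, G, B channel components
d1, d2, d3 = depth_hha[:, 0:1], depth_hha[:, 1:2], depth_hha[:, 2:3]
print(r.shape, d1.shape)                          # each component has shape (1, 1, H, W)
```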

[0078] The components of the hidden laye...



Abstract

The invention discloses an asymmetric GM multi-modal fusion saliency detection method and system based on CWAM. The method comprises the steps of: collecting image data for preprocessing and forming a sample data set; constructing a convolutional neural network model based on a deep learning strategy and inputting the sample data set for training to obtain saliency detection maps; forming a set from the trained saliency detection images and calculating a loss function value between this set and the corresponding real human-eye annotation image set to obtain an optimal weight vector and an optimal bias term; and inputting a to-be-detected image into the trained convolutional neural network model and performing prediction with the optimal weight vector and the optimal bias term to obtain a saliency detection image. According to the method, the multi-scale, multi-level rich image information of the depth map and the RGB image can be effectively utilized, the problem of dilution when high-level features are transmitted to the low level is effectively alleviated, and after a channel attention module is added, the expression of the salient region is enhanced.
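The abstract credits a channel attention module (CWAM) with enhancing the expression of salient regions, but its exact structure is not reproduced on this page. The following is only a generic squeeze-and-excitation-style channel attention sketch for orientation; the reduction ratio and layer arrangement are assumptions and should not be read as the patent's CWAM design.

```python
# Generic channel-attention sketch (SE-style); NOT the patent's exact CWAM structure.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average per channel
        self.fc = nn.Sequential(                     # excitation: learn per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweight channels to emphasize salient responses

# Usage example: attended = ChannelAttention(64)(torch.rand(2, 64, 56, 56))
```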

Description

Technical field

[0001] The invention relates to the technical field of deep-learning-based visual saliency detection, and in particular to a CWAM-based asymmetric GM multimodal fusion saliency detection method and system.

Background technique

[0002] When looking for objects of interest in images, humans can automatically capture semantic information between objects and their contexts, pay high attention to salient objects, and selectively suppress unimportant factors. This precise mechanism of visual attention has been explained in various biological models. The goal of saliency detection is to automatically detect the most informative and attractive parts of an image. In many image applications, such as image quality assessment, semantic segmentation, and image recognition, identifying salient objects can not only reduce the computational cost but also improve the performance of saliency models. Early saliency detection methods used hand-crafted features, that is, mainly fo...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06T7/0002, G06N3/08, G06T2207/10024, G06T2207/10028, G06T2207/20081, G06T2207/20221, G06N3/045, G06F18/25, G06F18/241, G06F18/214
Inventor: 靳婷, 张欣悦
Owner: HAINAN UNIVERSITY