Multi-modal fusion saliency detection method based on convolution block attention module

A detection method and multi-modal technology, applied to neural learning methods, character and pattern recognition, biological neural network models, etc., which addresses problems such as unrepresentative image features, poor saliency prediction maps, and loss of image feature information, achieving the effects of improving the efficiency and accuracy of saliency detection and improving the training results.

Status: Inactive · Publication Date: 2019-12-27
ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY
Cites: 0 · Cited by: 16

AI Technical Summary

Problems solved by technology

[0004] Most existing saliency detection methods adopt deep learning, combining convolutional layers and pooling layers to extract image features. However, features obtained simply through convolution and pooling operations are not sufficiently representative; in particular, the pooling operation discards feature information of the image, which leads to poor saliency prediction maps and low prediction accuracy.

Detailed Description of the Embodiments

[0039] The present invention will be described in further detail below in conjunction with the accompanying drawings and embodiments.

[0040] The overall implementation block diagram of the multi-modal fusion saliency detection method based on a convolutional block attention module proposed by the present invention is shown in Figure 1; the method comprises two processes, a training phase and a testing phase.
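The two-phase workflow can be sketched as follows. This is a minimal illustration assuming a PyTorch-style model that takes the left viewpoint image and the depth image as inputs; the model class, loss choice and hyperparameters are assumptions introduced here, not the patent's exact settings.

```python
# Hedged sketch of the training and testing phases described above.
# `model(rgb, depth)` stands for any multi-modal saliency network; the
# optimizer, loss and epoch count are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader


def train(model: nn.Module, loader: DataLoader, epochs: int = 50, lr: float = 1e-4) -> None:
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCEWithLogitsLoss()        # loss between predictions and gaze maps
    for _ in range(epochs):
        for rgb, depth, gaze in loader:       # left-view RGB, depth map, gaze target
            pred = model(rgb, depth)          # multi-modal forward pass
            loss = criterion(pred, gaze)
            opt.zero_grad()
            loss.backward()
            opt.step()
    torch.save(model.state_dict(), "weights.pth")   # keep the trained weights


@torch.no_grad()
def test(model: nn.Module, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
    model.load_state_dict(torch.load("weights.pth"))
    model.eval()
    return torch.sigmoid(model(rgb, depth))   # predicted saliency map
```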

[0041] The specific steps of the training phase are as follows:

[0042] Step 1_1: Select the left viewpoint images, depth images and corresponding real human eye gaze annotation images of N original stereoscopic images to form a training set, and record the left viewpoint image of the kth original stereoscopic image in the training set, the depth image of that original stereoscopic image, and the corresponding real human eye gaze image, the last of which is denoted as {G_k(x, y)}; since the left viewpoint image of the original stereoscopic image, that is, the RGB color image, has three channels, while the...
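For illustration only, such a training set could be organized as a PyTorch dataset along the lines of the sketch below; the directory layout, file naming, resizing and tensor conversion are assumptions introduced here and are not specified by the patent.

```python
# Hypothetical organization of the training set from step 1_1: left-viewpoint
# RGB images (3 channels), depth images (1 channel) and real human eye gaze
# maps G_k(x, y) (1 channel) as targets. Paths and transforms are assumptions.
import os

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class RGBDGazeDataset(Dataset):
    def __init__(self, root: str, size=(224, 224)):
        self.root = root
        self.names = sorted(os.listdir(os.path.join(root, "left")))
        self.to_tensor = transforms.Compose(
            [transforms.Resize(size), transforms.ToTensor()]
        )

    def __len__(self) -> int:
        return len(self.names)

    def __getitem__(self, k: int):
        name = self.names[k]
        # Left viewpoint RGB image: three channels.
        rgb = self.to_tensor(Image.open(os.path.join(self.root, "left", name)).convert("RGB"))
        # Depth image: a single channel.
        depth = self.to_tensor(Image.open(os.path.join(self.root, "depth", name)).convert("L"))
        # Real human eye gaze map G_k(x, y): single-channel training target.
        gaze = self.to_tensor(Image.open(os.path.join(self.root, "gaze", name)).convert("L"))
        return rgb, depth, gaze
```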

Abstract

The invention discloses a multi-modal fusion saliency detection method based on a convolutional block attention module. In the training stage, a convolutional neural network is constructed; the left viewpoint image and the depth image of each original image are input into the convolutional neural network for training to obtain the corresponding saliency detection image, and a loss function is computed between the set of saliency detection images generated by the model and the set of corresponding real human eye gaze images to obtain the optimal weight vector and bias term of the convolutional neural network training model. In the testing stage, the stereoscopic images of the selected data set are input into the trained convolutional neural network model to obtain saliency detection images. The method optimizes the extraction of image features by applying a novel attention module and performs multi-scale, multi-modal feature fusion, thereby improving the efficiency and accuracy of visual saliency detection.
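The convolutional block attention module referred to in the title is the channel-plus-spatial attention mechanism introduced by Woo et al. (CBAM, 2018); a minimal PyTorch sketch of such a module is given below. The reduction ratio, kernel size and feature dimensions are illustrative assumptions, not values taken from the patent.

```python
# Minimal sketch of a Convolutional Block Attention Module (CBAM):
# channel attention followed by spatial attention, applied to a feature map.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze spatial dims with average and max pooling, pass both results
    through a shared MLP, and gate the input channels with a sigmoid."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                 # (b, c)
        mx = self.mlp(x.amax(dim=(2, 3)))                  # (b, c)
        return x * torch.sigmoid(avg + mx).view(b, c, 1, 1)


class SpatialAttention(nn.Module):
    """Pool across channels, convolve the 2-channel map, and gate positions."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)                  # (b, 1, h, w)
        mx, _ = x.max(dim=1, keepdim=True)                 # (b, 1, h, w)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CBAM(nn.Module):
    """Channel attention followed by spatial attention (Woo et al., 2018)."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.channel = ChannelAttention(channels, reduction)
        self.spatial = SpatialAttention(kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))


if __name__ == "__main__":
    feat = torch.randn(2, 64, 56, 56)      # e.g. a backbone feature map
    print(CBAM(64)(feat).shape)            # torch.Size([2, 64, 56, 56])
```

In the multi-modal setting described in the abstract, such a module would typically be applied to the RGB and depth feature maps before or after fusion, so that the network re-weights the most informative channels and spatial positions.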

Description

Technical Field

[0001] The invention relates to a visual saliency detection method based on deep learning, and in particular to a multi-modal fusion saliency detection method based on a convolutional block attention module.

Background Art

[0002] Recognizing salient stimuli in the visual field is an important human attention mechanism: when looking freely, our eyes tend to attend to areas of the scene with distinctive visual stimuli, such as bright colors, special textures or more complex semantic content, and this mechanism guides our eyes toward salient, information-rich regions of the scene. The mechanism of human vision was first studied by neuroscientists, and one of its most widespread applications is imaging examination in the medical field; medical imaging examination is the basis for effective follow-up diagnosis and treatment. In recent years, computer vision has also taken up this line of research, and in the field of computer vision, this researc...

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/11; G06K9/46; G06N3/04; G06N3/08
CPC: G06T7/11; G06N3/08; G06T2207/10012; G06T2207/10024; G06T2207/20081; G06T2207/20084; G06T2207/20016; G06V10/462; G06N3/045
Inventor: 周武杰, 刘文宇, 雷景生, 钱亚冠, 王海江, 何成
Owner: ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY