
A Graph-Based Saliency Fusion Method for 3D Gaze Prediction

A saliency fusion and gaze-point prediction technology in the fields of image processing and computer vision. It addresses the problem that saliency predictions from different modal features are inconsistent, and achieves faster computation, better-optimized edges, and fewer abrupt changes in saliency values.

Active Publication Date: 2021-08-20
HUAZHONG UNIV OF SCI & TECH
Cites: 4 · Cited by: 0

AI Technical Summary

Problems solved by technology

[0010] In view of the above defects or improvement needs of the prior art, the present invention provides a graph-based saliency fusion method for 3D gaze point prediction, thereby solving the technical problem in the prior art that, during multi-modal feature fusion, the saliency predicted from different modal features is inconsistent or even contradictory.

Examples


Embodiment Construction

[0028] To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments described below may be combined with each other as long as they do not conflict.

[0029] A graph-based saliency fusion method for 3D gaze point prediction, comprising saliency map generation and graph-based fusion.

[0030] Saliency map generation includes obtaining a saliency map for each original frame of the original video sequence; the saliency maps include a 2D static saliency map, a motion saliency map, a depth saliency map, and a high-level semantic...
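The patent only names the four per-frame maps (2D static, motion, depth, high-level semantic) without fixing how each is computed. The sketch below is a toy illustration of the per-frame generation step, assuming NumPy arrays for frames; all function names and the specific saliency formulas (mean-deviation, frame differencing, inverse depth) are illustrative placeholders, not the patent's actual feature extractors.

```python
import numpy as np

def static_saliency(frame):
    """Toy 2D static saliency: deviation from the frame's mean intensity."""
    s = np.abs(frame - frame.mean())
    return s / (s.max() + 1e-8)          # normalize to [0, 1]

def motion_saliency(frame, prev_frame):
    """Toy motion saliency: absolute inter-frame intensity difference."""
    s = np.abs(frame - prev_frame)
    return s / (s.max() + 1e-8)

def depth_saliency(depth):
    """Toy depth saliency: nearer regions (smaller depth) score higher."""
    s = depth.max() - depth
    return s / (s.max() + 1e-8)

# Example: two 4x4 grayscale frames plus a depth map for the current frame.
rng = np.random.default_rng(0)
f0, f1 = rng.random((4, 4)), rng.random((4, 4))
d = rng.random((4, 4))
maps = [static_saliency(f1), motion_saliency(f1, f0), depth_saliency(d)]
```

Each map is normalized to a common [0, 1] range so that the later graph-based fusion step can compare modalities on the same scale.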



Abstract

The invention discloses a graph-based saliency fusion method for 3D gaze point prediction, comprising saliency map generation and graph-based fusion. Saliency map generation includes obtaining a saliency map for each original frame of an original video sequence. Graph-based fusion includes: taking as objectives the minimization of the saliency smoothing constraint between each superpixel in the original frame and its adjacent superpixels, and the minimization of the saliency difference between the original frame and its adjacent frames; combining these objectives with the saliency maps to construct an energy function for the original frame; and solving the energy function to obtain the target saliency map. Because the present invention considers both the saliency smoothing constraint between superpixels and their neighbors and the saliency difference between an original frame and its adjacent frames, the fusion method better reconciles the saliency predicted from different modal features during multi-modal feature fusion.
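The abstract describes a quadratic energy with three parts: a data term tying each superpixel's fused saliency to the multi-modal maps, a spatial smoothness term between adjacent superpixels, and a temporal term between adjacent frames. A minimal sketch of such an energy and its closed-form solution is below, assuming superpixel adjacency is given as an edge list; the weights `lam` and `mu` and the exact term forms are assumptions, since the patent text visible here does not give the concrete energy function.

```python
import numpy as np

def fuse_saliency(m, edges, s_prev, lam=1.0, mu=0.5):
    """
    Minimize over s:
        E(s) = ||s - m||^2                              (multi-modal data term)
             + lam * sum_{(i,j) in edges} (s_i - s_j)^2 (spatial smoothness)
             + mu  * ||s - s_prev||^2                   (temporal consistency)
    Setting the gradient to zero gives the linear system
        ((1 + mu) I + lam * L) s = m + mu * s_prev,
    where L is the graph Laplacian of the superpixel adjacency graph.
    """
    n = len(m)
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    A = (1.0 + mu) * np.eye(n) + lam * L
    return np.linalg.solve(A, m + mu * s_prev)

# 4 superpixels on a chain: the fused saliency is pulled toward the
# data term m, smoothed across neighbors, and kept close to the
# previous frame's result s_prev.
m = np.array([1.0, 0.0, 0.0, 1.0])       # combined multi-modal saliency
s_prev = np.array([0.8, 0.2, 0.2, 0.8])  # previous frame's fused saliency
edges = [(0, 1), (1, 2), (2, 3)]
s = fuse_saliency(m, edges, s_prev)
```

Because the energy is a strictly convex quadratic, the global minimum is obtained by one sparse linear solve per frame, which is consistent with the claimed speed-up over iterative fusion schemes.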

Description

Technical Field

[0001] The invention belongs to the fields of image processing and computer vision, and more particularly relates to a graph-based saliency fusion method for 3D gaze point prediction.

Background

[0002] In the field of visual attention, there are already quite a few models for 2D visual attention, which can be roughly divided into two categories: human gaze point prediction models and salient object detection models. The former compute salient intensity maps at the pixel scale, while the latter aim to detect and segment salient objects or regions in a scene. There are many visual attention models for human gaze prediction, but research on gaze prediction models for 3D videos has begun only in recent years. In a nutshell, the framework of most 3D gaze prediction models is extended from 2D gaze prediction models and mainly includes two steps. The first step is to extract a series of feature maps from the original co...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T5/50, G06K9/46
CPC: G06T5/50, G06V10/462
Inventors: Liu Qiong, Li Bei, Yang You, Yu Li (刘琼, 李贝, 杨铀, 喻莉)
Owner: HUAZHONG UNIV OF SCI & TECH