A visual attention detection method for 3D video

A technology relating to visual attention and 3D video, applied in image data processing, instrumentation, computing, etc.

Active Publication Date: 2018-02-16
方玉明

AI Technical Summary

Problems solved by technology

[0006] Most of the stereoscopic visual attention models introduced above are designed only for stereoscopic images; research on visual attention modeling for 3D stereoscopic video is still limited.



Examples


Embodiment Construction

[0123] The technical solutions of the present invention will be further described in detail below in conjunction with the accompanying drawings.

[0124] The process of the present invention is shown in Figure 1; the specific procedure is as follows.

[0125] Step 1: Extract low-level visual features from the 3D video frame to calculate feature contrast, and use a Gaussian model of the Euclidean distance to obtain the spatial saliency map of the 3D video frame;

[0126] First, divide the video frame into 8×8 image blocks. Let r, g, b denote the red, green and blue channels of the image, and define new features for each image block: the new red feature R = r - (g+b)/2, the new green feature G = g - (r+b)/2, the new blue feature B = b - (r+g)/2, and the new yellow feature Y = (r+g)/2 - |r-g|/2 - b. According to the above definitions, the following features of the image block can be calculated:

[0127] (1) Brightness component I:

[0128] I = (r+g+b)/3 (1)

[0129] (2) The first color component Cb:

[0130] Cb = B - Y (2)

...
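Purely as an illustration of Step 1, the following Python sketch (not part of the patent) computes the block features defined in paragraph [0126] and turns their contrast into a spatial saliency map using a Gaussian model of the Euclidean distance between block centres. The use of per-block channel means, a feature vector limited to I and Cb (the remaining feature components are elided above), the block size, and the Gaussian spread sigma are assumptions of this sketch, not details taken from the patent.

import numpy as np

def block_features(block):
    """Low-level features of one 8x8 image block (paragraph [0126]).

    block: (8, 8, 3) RGB array, assumed scaled to [0, 1]; per-block channel
    means are used as r, g, b, which is an assumption of this sketch.
    Further feature components (elided in the text above) would be added
    in the same way.
    """
    r, g, b = block[..., 0].mean(), block[..., 1].mean(), block[..., 2].mean()

    R = r - (g + b) / 2.0                      # new red feature
    G = g - (r + b) / 2.0                      # new green feature
    B = b - (r + g) / 2.0                      # new blue feature
    Y = (r + g) / 2.0 - abs(r - g) / 2.0 - b   # new yellow feature

    I = (r + g + b) / 3.0                      # brightness component, eq. (1)
    Cb = B - Y                                 # first color component, eq. (2)
    return np.array([I, Cb])                   # feature vector of the block

def spatial_saliency(frame, size=8, sigma=32.0):
    """Gaussian distance-weighted feature contrast (Step 1, paragraph [0125]).

    frame: (H, W, 3) RGB array in [0, 1]. sigma is a placeholder spread for
    the Gaussian model of the Euclidean distance between block centres.
    Returns a per-block saliency map of shape (H // size, W // size).
    """
    h, w = frame.shape[0] // size, frame.shape[1] // size
    feats = np.array([block_features(frame[y*size:(y+1)*size, x*size:(x+1)*size])
                      for y in range(h) for x in range(w)])
    centres = np.array([(x*size + size/2, y*size + size/2)
                        for y in range(h) for x in range(w)], dtype=float)

    # Pairwise feature contrast and pairwise spatial distance between blocks.
    contrast = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    dist = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)

    # Gaussian model of the Euclidean distance: closer blocks contribute more.
    weights = np.exp(-(dist ** 2) / (2.0 * sigma ** 2))
    sal = (weights * contrast).sum(axis=1).reshape(h, w)

    # Normalise to [0, 1] to form the spatial saliency map.
    sal -= sal.min()
    return sal / (sal.max() + 1e-12)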



Abstract

The invention relates to a visual attention detection method for three-dimensional video. First, feature contrast is calculated from low-level visual features to obtain the spatial saliency of a 3D video frame. Temporal saliency is then obtained from motion information, where motion saliency is computed from both in-plane motion and motion in depth. Finally, a saliency map is obtained by combining the spatial and temporal saliency, applying the law of common fate and the law of compactness from Gestalt psychology in the combination process. Experimental results show that the method performs well in saliency prediction for three-dimensional video.
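As a rough sketch of the final combination stage described in this abstract, the Python snippet below fuses a spatial and a temporal saliency map with a plain weighted sum; the weight alpha is an assumption for illustration only, and the Gestalt common-fate and compactness principles mentioned above are not reproduced here.

import numpy as np

def fuse_saliency(spatial_map, temporal_map, alpha=0.5):
    """Illustrative fusion of spatial and temporal saliency maps.

    spatial_map, temporal_map: 2-D arrays of the same shape, values in [0, 1].
    alpha: assumed trade-off between the spatial and temporal cues.
    The invention additionally applies Gestalt-inspired rules in this step;
    a simple weighted sum stands in for them in this sketch.
    """
    fused = alpha * spatial_map + (1.0 - alpha) * temporal_map
    fused -= fused.min()                 # normalise back to [0, 1]
    return fused / (fused.max() + 1e-12)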

Description

technical field

[0001] The invention describes a visual attention detection method for detecting the saliency of a three-dimensional video. The invention belongs to the field of multimedia technology, specifically to the field of digital image and digital video processing technology.

Background technique

[0002] Visual attention is an important mechanism in visual perception that can quickly detect salient information in natural images. When we observe natural images, selective attention allows us to focus on specific salient information and ignore other, unimportant information, since processing resources are limited. Broadly, visual attention methods can be divided into two types: bottom-up and top-down. Bottom-up processing is data-driven and task-independent, detecting salient regions automatically, while top-down approaches are cognitive processes involving specific tasks.

[0003] In general, salient regions extracted by visual attention models can be widely...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/285, G06T7/40
CPC: G06T2207/10016, G06T2207/30241
Inventors: 方玉明, 张驰, 诸汉炜, 温文媖
Owner: 方玉明