
Image quality evaluation method based on visual saliency and deep neural network

An image quality evaluation technology based on a deep neural network, applied in the field of image processing, which addresses the problem that existing methods do not weight the regions of interest of the human eye.

Active Publication Date: 2020-11-03
NANJING UNIV OF INFORMATION SCI & TECH

AI Technical Summary

Problems solved by technology

However, the SSIM algorithm does not take into account the weighting of the human eye's regions of interest, and it also ignores relevant characteristics of the HVS (Human Visual System). Therefore, objective quality evaluation methods consistent with human perception have become a research hotspot.
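For context, the baseline being criticized can be sketched as follows: a minimal, single-window SSIM in Python. The published algorithm uses a sliding Gaussian window, but even this simplified version makes the stated shortcoming visible, since every pixel contributes equally and no region-of-interest weighting is applied.

```python
import numpy as np

def ssim(x, y, L=255, k1=0.01, k2=0.03):
    # Minimal single-window SSIM: statistics are pooled over the whole
    # image, so every pixel counts equally -- there is no weighting of
    # the human eye's regions of interest.
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den
```

An identical image pair scores exactly 1.0; any distortion lowers the score, but a distortion in a salient region lowers it no more than the same distortion in the background.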




Embodiment Construction

[0057] The technical scheme of the present invention is described in further detail below in conjunction with the accompanying drawings:

[0058] The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without making creative efforts belong to the protection scope of the present invention.

[0059] The present invention utilizes the LIVE3DIQD_phase1 database of the LIVE laboratory of the University of Texas at Austin, which contains a total of 365 stereoscopic images of different distortion types, and performs subjective testing of image quality and saliency. The image quality evaluation adopts the double-stimulus continuous quality scale...
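The excerpt above refers to a double-stimulus subjective test, in which each subject rates both the reference and the distorted image. The patent text is truncated before giving a scoring formula, so the following difference-score averaging is an illustrative assumption, not the method of the source:

```python
import numpy as np

def dmos(ref_scores, dist_scores):
    # Difference mean opinion score for one distorted image under a
    # double-stimulus protocol: each subject rates the reference and
    # the distorted version, and the per-subject rating differences
    # are averaged. (Illustrative sketch; the exact formula is not
    # given in the source.)
    diff = np.asarray(ref_scores, dtype=np.float64) - \
           np.asarray(dist_scores, dtype=np.float64)
    return diff.mean()
```

A larger DMOS indicates a larger perceived quality drop relative to the reference; for example, three subjects scoring the reference 90, 85, 88 and the distorted image 60, 55, 58 give a DMOS of 30.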



Abstract

The invention discloses an image quality evaluation method based on visual saliency and a deep neural network. The method comprises the following steps: building an image saliency detection model based on the color saliency and central-region saliency components of visual saliency; generating a color-weighted saliency map by exploiting the human eye's particular attention to color and to the center of the image; obtaining the region of the salient object using the convex-hull principle and generating a convex-hull saliency map; fusing the color-weighted saliency map and the convex-hull saliency map to obtain a final saliency map, and giving an effect picture; adopting the LIVE3DIQD_phase1 database as the image preprocessing library and the subsequent training library; generating a fused left-eye and right-eye image, i.e. synthesizing a single-eye image by fusing the left image and the disparity-compensated right image with the left view as reference; generating a visual saliency map for the stereoscopic distorted image and fusing the generated monocular image with its saliency map; and combining convolution with the neural network to obtain a convolutional neural network.

Description

Technical field

[0001] The invention belongs to the field of image processing, in particular to the objective evaluation of the quality of stereoscopically distorted images, and relates to an objective image quality evaluation method using a saliency map and a composite image of a stereoscopic image.

Background technique

[0002] In recent years, the vigorous development of virtual reality (Virtual Reality, VR) technology has brought consumers a more realistic visual experience. As an important part of VR technology, stereoscopic image technology plays an extremely important role in the further development of VR technology, while distortion restricts the progress of stereoscopic image technology.

[0003] The problem of distortion of stereoscopic images has always been a research hotspot at home and abroad. Many researchers have put a lot of effort into researching the distortion of stereoscopic images in order to grasp the detailed causes of distortion so as to correct the ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/00; G06K9/46; G06N3/04; G06N3/08; G06T7/90
CPC: G06T7/0004; G06T7/90; G06N3/08; G06T2207/30168; G06T2207/10012; G06V10/462; G06N3/045
Inventor: 张闯, 李子钰, 徐盼娟, 朱月凯
Owner NANJING UNIV OF INFORMATION SCI & TECH