
Three-dimensional video color calibration method based on scale invariant feature transform (SIFT) characteristics and generalized regression neural networks (GRNN)

A color correction technology for stereoscopic video, applied in biological neural network models, color signal processing circuits, etc. It addresses problems such as failure to consider the influence of occlusion on correction information, a complex calculation process, and a limited application range, and achieves the effects of a simple process and a wide application range.

Inactive Publication Date: 2012-07-11
COMMUNICATION UNIVERSITY OF CHINA

AI Technical Summary

Problems solved by technology

[0012] The existing stereoscopic video image color correction methods described above each have their own limitations: either the application range is limited or the calculation process is complicated, and most of them do not take into account the influence of occlusion in the overlapping area between the image to be corrected and the reference image on the correction information.
[0013] In the process of realizing the present invention, the inventor found that the prior art suffers from at least the following defects: a small application range, a complicated calculation process, and poor accuracy.



Examples


Embodiment 1

[0041] According to an embodiment of the present invention, a stereoscopic video color correction method based on SIFT features and a GRNN network is provided. As shown in Figure 1, the stereoscopic video color correction method based on SIFT features and a GRNN network of this embodiment includes:

[0042] Step 100: Using the SIFT feature matching extraction method, extract SIFT feature points from the viewpoint image to be corrected and from the reference image respectively, compute matching feature point pairs to establish the pixel point pairs that describe the color correspondence between the viewpoint image to be corrected and the reference image, and obtain the color values of the matching feature point pairs;

[0043] In Step 100, the SIFT feature is a computer vision algorithm used to detect and describe local features in an image. It looks for extreme points in scale space and extracts their position, scale, and rotation invariants. The algorithm w...
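To make Step 100 concrete, the following is a minimal sketch (not the patent's reference implementation) of SIFT matching between the view to be corrected and the reference view, assuming OpenCV's SIFT implementation and Lowe's ratio test; the function name matched_color_pairs and the ratio threshold are illustrative assumptions.

```python
# Minimal sketch of Step 100: collect color values at matched SIFT keypoints.
# Assumes OpenCV >= 4.4 (cv2.SIFT_create) and geometrically rectified inputs.
import cv2
import numpy as np

def matched_color_pairs(src_bgr, ref_bgr, ratio=0.75):
    """Return (src_colors, ref_colors): BGR values at matched SIFT keypoints."""
    sift = cv2.SIFT_create()
    kp_s, des_s = sift.detectAndCompute(cv2.cvtColor(src_bgr, cv2.COLOR_BGR2GRAY), None)
    kp_r, des_r = sift.detectAndCompute(cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2GRAY), None)

    # 2-NN matching plus Lowe's ratio test to keep only reliable correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des_s, des_r, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    # Read the pixel colors at each matched keypoint in both views.
    src_colors, ref_colors = [], []
    for m in good:
        xs, ys = map(int, np.round(kp_s[m.queryIdx].pt))
        xr, yr = map(int, np.round(kp_r[m.trainIdx].pt))
        src_colors.append(src_bgr[ys, xs].astype(np.float64))
        ref_colors.append(ref_bgr[yr, xr].astype(np.float64))
    return np.array(src_colors), np.array(ref_colors)
```

The ratio test is one common way to suppress unreliable matches; the patent text above only specifies that matching feature point pairs are computed, so any robust matching strategy could stand in here.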

Embodiment 2

[0049] As shown in Figure 2, the stereoscopic video color correction method based on SIFT features and a GRNN network of this embodiment includes:

[0050] Step 201: Select the RGB color space, and perform color conversion on the viewpoint image to be corrected;

[0051] Step 202: Using the SIFT feature matching extraction method, extract SIFT feature points from the viewpoint image to be corrected and from the geometrically corrected reference image respectively, compute matching feature point pairs to establish the pixel point pairs that describe the color correspondence between the viewpoint image to be corrected and the reference image, and obtain the color values of the matching feature point pairs;

[0052] Step 203: Using the GRNN network, combined with the color values of the matching feature point pairs, construct a GRNN neural network that reflects the mapping relationship between the viewpoint image to be corrected and...
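Since the patent does not publish its network parameters, here is a minimal NumPy sketch of a GRNN in the usual Specht sense (Gaussian-kernel regression whose pattern layer simply stores the matched color pairs from Step 202); the class name and the smoothing parameter sigma are assumptions, not values from the patent.

```python
# Minimal GRNN sketch: Gaussian-kernel regression over stored color samples.
import numpy as np

class GRNN:
    def __init__(self, sigma=10.0):
        self.sigma = sigma  # spread of the Gaussian kernel (hypothetical value)

    def fit(self, X, Y):
        # Pattern layer: store the training samples as-is.
        self.X = np.asarray(X, dtype=np.float64)   # (N, 3) colors of the view to correct
        self.Y = np.asarray(Y, dtype=np.float64)   # (N, 3) corresponding reference colors
        return self

    def predict(self, Q):
        Q = np.asarray(Q, dtype=np.float64)        # (M, 3) query colors
        # Squared Euclidean distance between every query and every stored sample.
        d2 = ((Q[:, None, :] - self.X[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))  # Gaussian kernel weights
        # Summation/output layers: weighted average of the reference colors.
        return (w @ self.Y) / np.clip(w.sum(axis=1, keepdims=True), 1e-12, None)
```

A GRNN of this form needs no iterative training, which is consistent with the "simple calculation process" claimed in the abstract; only the smoothing parameter has to be chosen.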



Abstract

The invention discloses a three-dimensional video color calibration method based on scale invariant feature transform (SIFT) characteristics and generalized regression neural networks (GRNN). The method comprises: utilizing a SIFT characteristic matching extraction method to respectively extract SIFT characteristic points between the viewpoint images to be calibrated and the reference images, calculating matching characteristic point pairs, and obtaining the color values of the matching characteristic point pairs; using GRNN networks, combined with the color values of the matching characteristic point pairs, to construct GRNN neural networks reflecting the mapping relationships between the viewpoint images to be calibrated and the reference images; and inputting the images to be calibrated into the constructed GRNN neural networks for color calibration and outputting the color-calibrated images. The three-dimensional video color calibration method based on SIFT characteristics and GRNN has the advantages of a wide application scope, a simple calculation process, and high accuracy and stability.
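Tying the pipeline in the abstract together, a hedged end-to-end usage sketch might look as follows; it reuses the illustrative matched_color_pairs and GRNN helpers from the sketches above, and the file names and sigma value are hypothetical.

```python
# Hedged end-to-end usage, reusing matched_color_pairs() and GRNN from the
# sketches above (both are illustrative names, not from the patent itself).
import cv2
import numpy as np

src = cv2.imread("view_to_correct.png")   # hypothetical file names
ref = cv2.imread("reference_view.png")

# 1) SIFT matching yields color pairs; 2) fit the GRNN color mapping.
src_colors, ref_colors = matched_color_pairs(src, ref)
model = GRNN(sigma=10.0).fit(src_colors, ref_colors)

# 3) Map every pixel of the view to be corrected through the learned mapping.
h, w, _ = src.shape
flat = src.reshape(-1, 3).astype(np.float64)
# Process in batches so the kernel matrix stays small; a naive full pass also works.
out = np.vstack([model.predict(flat[i:i + 4096]) for i in range(0, len(flat), 4096)])
corrected = np.clip(out.reshape(h, w, 3), 0, 255).astype(np.uint8)
cv2.imwrite("corrected_view.png", corrected)
```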

Description

Technical Field
[0001] The present invention relates to the technical field of image processing, and in particular to a stereoscopic video color correction method based on scale-invariant feature transform (SIFT) features and a generalized regression neural network (GRNN, Generalized Regression Neural Network).
Background Technique
[0002] In the field of image processing technology, when stereoscopic video is captured, the cameras at the various viewpoints often exhibit local or overall color differences, and these differences have many causes. It is usually assumed that the surface of the object satisfies the diffuse reflection condition, that is, the chromaticity of the light reflected from the object surface does not change as the three-dimensional viewpoint changes; the actual situation, however, does not fully satisfy the diffuse reflection condition. Therefore, when each viewpoint indirectly receives the same lighting conditions, interference will also occ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N9/64, H04N9/68, G06N3/02
Inventors: 吕朝辉, 董跃, 张懿斌
Owner: COMMUNICATION UNIVERSITY OF CHINA