
Method for modeling background based on camera response function in automatic gain scene

A technology combining automatic gain analysis and camera response modeling, applied in the field of background modeling. It addresses shortcomings of existing background-difference methods: they cannot judge whether a rapid, large-scale change in pixel grey values is genuine foreground or merely the effect of automatic gain, which causes large errors and prevents fully automatic operation.

Inactive Publication Date: 2012-11-14
NANJING HUICHUAN IND VISUAL TECH DEV

Problems solved by technology

[0012] The problems to be solved by the present invention are as follows. In background-difference motion detection, existing background modeling methods cannot judge whether a rapid change in the grey values of a large region of pixels is foreground or the background after automatic gain, which causes large errors. They also require training data to be collected in advance and are easily affected by noise. These defects are not conducive to fully automatic background-difference motion detection.
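The failure mode described above can be made concrete with a toy example (NumPy; the array sizes, intensity values, and threshold are chosen purely for illustration): when automatic gain uniformly brightens an unchanged scene, a naive fixed-threshold background difference flags the entire image as foreground.

```python
import numpy as np

# A static scene stored as the background reference frame:
background = np.full((4, 4), 100, dtype=np.uint8)

# Automatic gain kicks in and uniformly brightens the (unchanged) scene:
frame = np.full((4, 4), 140, dtype=np.uint8)

# Naive background difference with a fixed threshold:
diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
foreground = diff > 25

# Every pixel is flagged as foreground although nothing in the scene moved:
print(foreground.all())  # → True
```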



Embodiment Construction

[0097] The present invention will be described in detail below with reference to the accompanying drawings, and the described embodiments are intended to facilitate understanding of the present invention.

[0098] Figure 1 is a flowchart of the background modeling method for camera automatic gain scenes. Following the flow sequence, the specific implementation of each step of the inventive method is as follows:

[0099] 1. Obtain image sequence

[0100] The system first acquires an image sequence and feeds it to two parallel modules: a background-difference module and an automatic-gain background modeling module; the latter implements the method of the present invention.

[0101] 2. Determine whether the camera response function has been recovered. If not, construct the critical false-detection objective function T and set the automatic-gain critical false-detection threshold t. The automatic gain critical fa...
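Once the response function is available, the method (per the abstract) estimates the gain ratio online, frame by frame, from the homography of the grey-scale difference function with respect to the gain ratio. As a much-simplified stand-in for illustration only — not the patented estimator — one can take the median per-pixel intensity ratio over pixels presumed to belong to the background; the function name and the estimator itself are our assumptions:

```python
import numpy as np

def estimate_gain_ratio(frame, background, bg_mask=None):
    """Crude illustrative stand-in for frame-by-frame gain-ratio
    estimation: the median per-pixel intensity ratio over pixels
    presumed to be background. The patent instead exploits the
    grey-scale difference function's homography w.r.t. the gain ratio."""
    f = frame.astype(np.float64) + 1.0   # +1 avoids division by zero
    b = background.astype(np.float64) + 1.0
    ratio = f / b
    if bg_mask is not None:
        ratio = ratio[bg_mask]          # restrict to presumed background
    return float(np.median(ratio))
```

For a frame that is a uniform brightening of the reference (100 → 150 grey levels), the estimate lands near 1.5, as expected.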


Abstract

The invention discloses a method for modeling a background based on a camera response function in an automatic gain scene, which comprises the following steps: performing an analysis based on the progressiveness of automatic gain to obtain a roughly segmented background area; obtaining low-noise training data with a joint-histogram method; recovering, once, a globally optimal camera response function by a method based on maximum likelihood estimation and parameter constraints; computing the gain ratio online, frame by frame, using the correlation between the foreground-background difference and the gain ratio, and the homography of the grey-scale difference function with respect to the gain ratio; and, if the gain ratio is not 1, updating the background reference frame through the camera response function and the gain ratio so that its gain matches that of the current frame, otherwise leaving the background reference frame unchanged. The background reference frame thus tracks the camera's gain coefficient as it changes. The method overcomes the shortcoming of conventional approaches, which cannot follow the camera's automatic gain and therefore perceive the background as changing rapidly, thereby ensuring efficient motion detection.
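The core update step in the abstract — remapping the stored background reference frame to the current gain through the camera response function — can be sketched as follows. The gamma-type response used here is a common illustrative approximation, not the globally optimal function the patent recovers, and all names are our own:

```python
import numpy as np

# Illustrative gamma-type camera response function (a common textbook
# approximation; the patent recovers its own globally optimal function):
GAMMA = 2.2

def crf(irradiance):      # irradiance in [0, 1] -> intensity in [0, 1]
    return irradiance ** (1.0 / GAMMA)

def crf_inv(intensity):   # inverse response
    return intensity ** GAMMA

def remap_background(bg, gain_ratio):
    """Sketch of the abstract's update: map intensities back to
    irradiance through the inverse response, scale by the gain ratio,
    and map forward again, yielding a reference frame whose gain
    matches the current frame. Not the patented algorithm itself."""
    if np.isclose(gain_ratio, 1.0):
        return bg  # gain unchanged: keep the reference frame as-is
    e = crf_inv(bg.astype(np.float64) / 255.0)       # inverse response
    e = np.clip(gain_ratio * e, 0.0, 1.0)            # apply gain change
    return np.rint(crf(e) * 255.0).astype(np.uint8)  # forward response
```

A gain ratio of 1 leaves the reference untouched; a ratio above 1 brightens every pixel, mirroring what the camera's automatic gain did to the live frame.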

Description

Technical Field

[0001] The invention relates to the fields of image processing and computer vision, in particular to a background modeling method that accurately follows changes in gain coefficient in camera automatic gain scenes, so as to obtain precise motion detection.

Background Technique

[0002] Motion detection is an important research direction of computer vision, and it is also a key and basic module in many computer vision applications, such as video semantic annotation, pattern recognition, traffic video surveillance, and human body tracking. The purpose of motion detection is to completely segment the moving object of interest from the video. Whether the segmentation is accurate or not directly affects the accuracy of subsequent modules.

[0003] Motion detection methods can be classified into the following categories [1]: the optical flow method, the frame difference method, and the background difference method. In the fixed camera scene, the background subtraction method has be...
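For context, the background-difference (background subtraction) method named above reduces, in its simplest fixed-camera form, to thresholding the per-pixel difference against a reference frame — a minimal sketch with an arbitrary illustrative threshold, not any specific published variant:

```python
import numpy as np

def background_difference(frame, background, threshold=25):
    """Classic background subtraction: a pixel is foreground when its
    absolute grey-level difference from the reference frame exceeds
    the threshold (value here is arbitrary, for illustration)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # 1 = foreground, 0 = background
```

This is the baseline that automatic gain breaks, since a global brightness jump exceeds the threshold everywhere at once.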

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/20; G06T5/00; H04N5/232
Inventors: 江登表, 李勃, 董蓉, 刘晓男, 胥欣, 陈启美, 何军
Owner NANJING HUICHUAN IND VISUAL TECH DEV