
Multi-camera cooperative character tracking method in complex scene

A multi-camera tracking technology for complex scenes, applied in image data processing, instruments, closed-circuit television systems, etc. The method addresses problems not covered by prior reports, improves the effectiveness and reliability of tracking, and ensures effective monitoring of moving targets.

Inactive Publication Date: 2012-09-12
XIDIAN UNIV
Cites: 2 | Cited by: 39

AI Technical Summary

Problems solved by technology

[0007] In view of the problems in existing research and applications, the present invention proposes a multi-camera cooperative target tracking method that can adapt to environmental changes. The project team searched domestic and foreign patent documents and published journal papers and found no similar reports or documents closely related to the present invention.

Method used



Examples


Embodiment 1

[0036] Figure 1 is a schematic diagram of the operation flow of the present invention. Following the flow in Figure 1, the detection method of the present invention comprises the following steps:

[0037] 1. Moving target monitoring: analyze the surveillance video obtained by each camera and convert the video into a corresponding image sequence. An improved temporal difference method combined with background subtraction is used to establish a background model, and different update strategies are adopted for different regions. First, a binary image is obtained by the temporal difference method to locate the approximate motion area, and the background is updated according to formula (1): the background model of foreground pixels in the detection result is not updated, while relatively stable background pixels are updated, and different update speeds can be adopted depending on whether a moving target appears in the detection result, where B_t(x, y) and B_(t-1)(x, y) ...
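A minimal sketch of this step is given below, assuming a single grey-level channel. The function name `update_background`, the threshold `diff_thresh` and the blending rates `alpha_bg` / `alpha_fg` are illustrative assumptions; the patent's exact formula (1) and its update rates are not reproduced in the truncated text above.

```python
import numpy as np

def update_background(frame, prev_frame, background,
                      diff_thresh=25.0, alpha_bg=0.05, alpha_fg=0.0):
    """Selective background update combining temporal differencing with
    background subtraction (a sketch of step 1; thresholds and rates are
    illustrative, not the patent's formula (1))."""
    frame = frame.astype(np.float32)
    prev_frame = prev_frame.astype(np.float32)

    # Temporal difference gives a rough binary motion mask.
    motion_mask = np.abs(frame - prev_frame) > diff_thresh

    # Foreground (moving) pixels keep the old background model;
    # stable background pixels are blended toward the current frame.
    alpha = np.where(motion_mask, alpha_fg, alpha_bg)
    background = (1.0 - alpha) * background + alpha * frame

    # Background subtraction on the updated model yields the foreground mask.
    foreground = (np.abs(frame - background) > diff_thresh).astype(np.uint8) * 255
    return background, foreground
```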

Embodiment 2

[0043] Traditional monitors are fixed in one place. Because of day-night changes and variations in ambient light, surveillance videos shot at different times in the same space are not the same. Video images shot during the daytime have sufficient light, a large field of view, and moving targets that are obvious and easy to see; in contrast, night-time and overcast images appear blurred because the light sources in the scene are unevenly distributed, making targets difficult to track. The goal is therefore a detection and tracking method that remains robust under different lighting, different climates and complex backgrounds. Based on fuzzy theory, the brightness values of the scene image are converted into a fuzzy matrix, and real-time fuzzy enhancement is performed on this matrix to enhance the dark parts of the image while retaining the original saturation information of each p...
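The sketch below illustrates one way such a fuzzy brightness enhancement can be realized, assuming a Pal-King style fuzzifier and intensification operator; the patent does not spell out its membership function, so `fd`, `fe` and the operator are illustrative assumptions. The hue and saturation channels are left untouched so the original color information of each pixel is preserved.

```python
import cv2
import numpy as np

def fuzzy_enhance(bgr_image, fd=128.0, fe=2.0, iterations=1):
    """Fuzzy brightness enhancement (a sketch of step 2): brightness is
    mapped into a fuzzy membership matrix, intensified, and mapped back,
    while saturation and hue are kept unchanged."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v = v.astype(np.float32)

    # Fuzzification: membership of each pixel in the "bright" set.
    mu = (1.0 + (255.0 - v) / fd) ** (-fe)

    # Intensification: pull memberships below 0.5 down and those above 0.5
    # up, sharpening the separation between dark and bright scene regions.
    for _ in range(iterations):
        mu = np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)

    # Defuzzification back to grey levels.
    v_enhanced = 255.0 - fd * (np.maximum(mu, 1e-6) ** (-1.0 / fe) - 1.0)
    v_enhanced = np.clip(v_enhanced, 0, 255).astype(np.uint8)

    return cv2.cvtColor(cv2.merge([h, s, v_enhanced]), cv2.COLOR_HSV2BGR)
```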

Embodiment 3

[0068] In addition, the present invention uses connected-region labeling of the image to identify different objects: each object is independent of the others and does not intersect them; if two regions intersect, they are regarded as the same object. The binary image is processed; because a binary image contains only the two colors black and white, each connected component is generally a single black or white area. For each white (or black) pixel in the given image, it is judged whether its neighboring pixels are also white (or black). Assuming that every object in the image is white, each pixel of each row is scanned in sequence from top to bottom; if the current point is white, the four already-visited neighbors to its left, upper left, top and upper right are examined, and the point is handled according to the number of white neighbors, as follows (a code sketch of this procedure is given after the list):

[0069] 1. Zero white neighbors: assign a new label and consider the point to belong to a new object.

[0070] 2. One white neighbor: take the label of that neighbor and cons...
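The truncated rules above describe a classic scan-based connected-component labeling. The sketch below follows the stated scan order and neighborhood (left, upper left, top, upper right), assuming white objects on a black background; the union-find equivalence resolution in the second pass is an assumption filled in where the original list breaks off.

```python
import numpy as np

def label_connected_regions(binary):
    """Scan-based connected-region labeling (a sketch of Embodiment 3,
    assuming non-zero pixels are object pixels)."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    next_label = 1
    parent = {}  # union-find table for label equivalences

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    # First pass: scan every pixel of each row from top to bottom.
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            # Already-visited neighbors: left, upper left, top, upper right.
            neighbors = []
            if x > 0 and labels[y, x - 1]:
                neighbors.append(labels[y, x - 1])
            if y > 0:
                for dx in (-1, 0, 1):
                    nx = x + dx
                    if 0 <= nx < w and labels[y - 1, nx]:
                        neighbors.append(labels[y - 1, nx])
            if not neighbors:
                # Zero white neighbors: new label, new object.
                labels[y, x] = next_label
                parent[next_label] = next_label
                next_label += 1
            else:
                # Otherwise take the smallest neighboring label and record
                # that all neighboring labels belong to the same object.
                m = min(neighbors)
                labels[y, x] = m
                for n in neighbors:
                    union(m, n)

    # Second pass: replace every label by its equivalence-class root.
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels
```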



Abstract

The invention discloses a multi-camera cooperative character tracking method in a complex scene. The method comprises the following steps: A1. monitoring a moving object; A2. enhancing the scene image: converting the brightness values of the scene image into a fuzzy matrix based on fuzzy theory and carrying out real-time fuzzy enhancement on the matrix to enhance the dark parts of the image, while keeping the original saturation information of each pixel so as to ensure effective monitoring of the moving object; A3. carrying out multi-camera tracking of a designated object. With the method of the invention, modeling is simple, the algorithm is simplified, and detection is accurate; emergencies in each monitored area can be effectively tracked; the autonomous analysis and intelligent monitoring capability of the system is increased; a robust object detection and tracking method suitable for different illumination, different climates and complex backgrounds is obtained; and the safety monitoring level of public places can be improved.

Description

Technical field

[0001] The invention belongs to the fields of computer vision and intelligent information processing. It involves computer monitoring technology based on moving images and pattern recognition technology based on statistical learning. The invention mainly relates to an intelligent analysis method for video surveillance content, and in particular to a multi-camera coordinated target tracking method in complex scenes.

Background technique

[0002] In recent years, with the rapid development of urban, community and industry informatization, the application field of video surveillance has gradually expanded, and video surveillance systems have come to cover a wide range of settings. As the basis of violation detection in traffic systems and an important pillar of safety detection in many settings, the research and application value of video surveillance is increasing day by day, and it has become the research object of many sc...

Claims


Application Information

IPC(8): G06T7/00; H04N7/18
Inventors: 王韦桦, 刘志镜, 屈鉴铭, 贺文骅, 唐国良, 赵俊敏, 熊静, 侯晓慧, 王静, 袁通, 刘慧, 王纵虎, 陈东辉, 姚勇
Owner: XIDIAN UNIV