
Adaptive cross-camera multi-target tracking method and system

A cross-camera multi-target tracking technology, applied in the field of adaptive cross-camera multi-target tracking methods and systems, which solves the problem of the low accuracy of existing multi-target tracking and achieves the effects of improved accuracy and strong robustness.

Publication Date: 2017-02-01 (status: Inactive)
Applicant: ZTE CORP

AI Technical Summary

Problems solved by technology

[0006] The technical problem to be solved by the present invention is to provide an adaptive cross-camera multi-target tracking method and system, so as to solve the problem of the low accuracy of existing cross-camera multi-target tracking.



Examples


Embodiment 1

[0058] This embodiment provides an adaptive cross-camera multi-target tracking method, which mainly includes the following operations:

[0059] First, fix the size of the tracking window and use the pre-established tracking model to obtain the position of the target object in the current video frame; then change the size of the tracking window at the obtained position and use the tracking model to obtain the scale of the target object; finally, update the tracking model online according to the obtained scale of the target object (a sketch of this two-stage search is given after this embodiment);

[0060] According to the updated tracking model, perform a log-polar transformation on the image of the target object in the current video frame, perform mixture-of-Gaussians modeling on the transformed image, measure the center offset and degree of change of the target object, and determine whether the target object has disappeared (see the log-polar sketch after this embodiment).

[0061] In addition, when operating according to the above method, you can also...
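The two-stage search in [0059] can be pictured roughly with the following minimal Python sketch. It is not the patent's implementation: the score_fn callable stands in for the pre-established tracking model (such as the online SVM described in the examples), and the search stride and scale set are illustrative assumptions.

import numpy as np

def track_frame(frame, prev_box, score_fn, search_radius=40,
                scales=(0.9, 0.95, 1.0, 1.05, 1.1)):
    # frame: current video frame as an H x W (or H x W x C) numpy array
    # prev_box: (x, y, w, h) of the target in the previous frame
    # score_fn: callable(patch) -> float, confidence given by the tracking model
    x0, y0, w, h = prev_box
    H, W = frame.shape[:2]

    # Stage 1: keep the window size fixed and scan positions near the old one.
    best_pos, best_score = (x0, y0), -np.inf
    for dy in range(-search_radius, search_radius + 1, 4):
        for dx in range(-search_radius, search_radius + 1, 4):
            x, y = x0 + dx, y0 + dy
            if x < 0 or y < 0 or x + w > W or y + h > H:
                continue
            s = score_fn(frame[y:y + h, x:x + w])
            if s > best_score:
                best_score, best_pos = s, (x, y)

    # Stage 2: at the best position, vary the window size to estimate scale.
    bx, by = best_pos
    best_box, best_score = (bx, by, w, h), -np.inf
    for sc in scales:
        sw, sh = int(round(w * sc)), int(round(h * sc))
        if sw < 4 or sh < 4 or bx + sw > W or by + sh > H:
            continue
        s = score_fn(frame[by:by + sh, bx:bx + sw])
        if s > best_score:
            best_score, best_box = s, (bx, by, sw, sh)

    return best_box  # the scale of this box drives the online model update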
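The disappearance test in [0060] combines a log-polar transform with mixture-of-Gaussians modeling. The sketch below uses OpenCV's warpPolar and scikit-learn's GaussianMixture as convenient stand-ins; the features fed to the mixture and the offset threshold are assumptions, not values taken from the patent.

import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def logpolar_gmm_means(patch, n_components=3):
    # Log-polar transform of the target patch, then a Gaussian mixture fitted
    # to the coordinates of its brighter pixels; returns the component means.
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY) if patch.ndim == 3 else patch
    h, w = gray.shape
    lp = cv2.warpPolar(gray, (w, h), (w / 2.0, h / 2.0),
                       maxRadius=min(h, w) / 2.0, flags=cv2.WARP_POLAR_LOG)
    ys, xs = np.nonzero(lp > lp.mean())
    pts = np.column_stack([xs, ys]).astype(float)
    if len(pts) < n_components:          # degenerate patch, nothing to model
        return None
    return GaussianMixture(n_components=n_components, random_state=0).fit(pts).means_

def has_disappeared(prev_means, curr_means, offset_thresh=8.0):
    # Compare the overall centroids of the two mixtures between consecutive
    # frames; a drift above the (illustrative) threshold flags disappearance.
    if prev_means is None or curr_means is None:
        return True
    offset = np.linalg.norm(curr_means.mean(axis=0) - prev_means.mean(axis=0))
    return offset > offset_thresh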

Example 1

[0146] This example tracks a face in an indoor-scene video with severe occlusion; the person in the video covers the face with a book. This example comprises the following steps:

[0147] Step 1: Initialize the input.

[0148] Input the path of the video sequence and the initial tracking rectangle to the tracking algorithm. The target in the initial rectangle is the person's face, as shown in Figure 2(a).

[0149] Step 2: Tracking model establishment.

[0150] Read the initial frame containing the target, collect samples within a range of R = 40 pixels around the target position (each sample rectangle has the same size as the target rectangle), extract Haar-like features for all samples, and record the rectangle position of each sample. After obtaining the training samples, the online SVM tracking model is obtained by solving the following convex optimiza...
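As a rough illustration of this step, the sketch below collects same-size sample rectangles within R = 40 pixels of the target, computes a small set of Haar-like features from an integral image, and uses scikit-learn's SGDClassifier with hinge loss as a stand-in for the patent's online SVM and its convex optimization; the labelling rule and feature layout are assumptions made for the example.

import cv2
import numpy as np
from sklearn.linear_model import SGDClassifier

def haar_like_features(patch, grid=4):
    # Simplified Haar-like features: differences of adjacent cell sums on a
    # grid x grid partition of the integral image of a 32 x 32 resized patch.
    ii = cv2.integral(cv2.resize(patch, (32, 32)))
    step = 32 // grid
    sums = np.empty((grid, grid))
    for r in range(grid):
        for c in range(grid):
            y0, x0 = r * step, c * step
            y1, x1 = y0 + step, x0 + step
            sums[r, c] = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    return np.concatenate([(sums[:, 1:] - sums[:, :-1]).ravel(),   # horizontal pairs
                           (sums[1:, :] - sums[:-1, :]).ravel()])  # vertical pairs

def collect_training_set(frame, box, radius=40, stride=5):
    # Sample rectangles of the target's size within `radius` pixels of it;
    # boxes very close to the target are labelled positive, the rest negative.
    x0, y0, w, h = box
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if frame.ndim == 3 else frame
    H, W = gray.shape
    X, y = [], []
    for dy in range(-radius, radius + 1, stride):
        for dx in range(-radius, radius + 1, stride):
            x, yy = x0 + dx, y0 + dy
            if x < 0 or yy < 0 or x + w > W or yy + h > H:
                continue
            X.append(haar_like_features(gray[yy:yy + h, x:x + w]))
            y.append(1 if dx * dx + dy * dy <= 8 * 8 else 0)
    return np.array(X), np.array(y)

# Example usage (first_frame and init_box come from Step 1):
# X, y = collect_training_set(first_frame, init_box)
# svm = SGDClassifier(loss="hinge", alpha=1e-4).partial_fit(X, y, classes=[0, 1])

The hinge loss makes the SGDClassifier a linear SVM, and partial_fit allows the model to be updated frame by frame, which is the role the online SVM plays in this method.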

Example 2

[0204] In this example, two pedestrians in a road video are tracked simultaneously; the illumination changes in the video are large and the resolution of the targets is low. This example comprises the following steps:

[0205] Step 1: Initialize the input.

[0206] The user selects a video sequence and one or more targets to be tracked. The tracking targets in this video are two pedestrians, as shown in Figure 3(a).

[0207] Step 2: Tracking model establishment.

[0208] Read the initial frame containing the targets, collect samples within a range of R = 40 pixels around each target position (each sample rectangle has the same size as the corresponding target rectangle), extract Haar-like features for all samples, and record the rectangle position of each sample. After obtaining the training samples, the online SVM tracking model is obtained by solving the follow...
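Because this example tracks two pedestrians at once, each selected target needs its own model and box. The class below is an illustrative sketch of that per-target bookkeeping, not the patent's data structure; it reuses the kinds of feature, sampling, and tracking callables sketched earlier and an SGDClassifier as the online SVM stand-in.

from sklearn.linear_model import SGDClassifier

class MultiTargetTracker:
    # One online model and one bounding box per selected target.
    def __init__(self, feature_fn, sample_fn, track_fn):
        self.feature_fn = feature_fn   # patch -> feature vector (e.g. Haar-like)
        self.sample_fn = sample_fn     # (frame, box) -> (X, y) training set
        self.track_fn = track_fn       # (frame, box, score_fn) -> new box
        self.models, self.boxes = {}, {}

    def add_target(self, tid, frame, box):
        # Train an initial online SVM for the newly selected target.
        X, y = self.sample_fn(frame, box)
        model = SGDClassifier(loss="hinge", alpha=1e-4)
        model.partial_fit(X, y, classes=[0, 1])
        self.models[tid], self.boxes[tid] = model, box

    def step(self, frame):
        # Advance every target by one frame, then update each model online.
        for tid, model in self.models.items():
            score_fn = lambda patch, m=model: float(
                m.decision_function([self.feature_fn(patch)])[0])
            self.boxes[tid] = self.track_fn(frame, self.boxes[tid], score_fn)
            X, y = self.sample_fn(frame, self.boxes[tid])
            model.partial_fit(X, y)
        return dict(self.boxes)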



Abstract

The invention discloses an adaptive cross-camera multi-target tracking method and system, and relates to cross-camera multi-target tracking technology in the field of computer vision. The method comprises the following steps: firstly, fixing the size of a tracking window and obtaining the position of the target object in the current video frame by means of a pre-established tracking model; changing the size of the tracking window at the obtained position and obtaining the scale of the target object with the tracking model; updating the tracking model online according to the obtained scale of the target object; and, according to the updated tracking model, performing a log-polar transformation on the image of the target object in the current video frame, performing mixture-of-Gaussians modeling on the transformed image, measuring the center offset and degree of change of the target object, and determining whether the target object has disappeared. The invention also discloses an adaptive cross-camera multi-target tracking system. According to the technical scheme, the accuracy and the robustness of the tracking algorithm can be effectively improved.

Description

Technical Field

[0001] The invention relates to cross-camera multi-target tracking technology in the field of computer vision, in particular to an adaptive cross-camera multi-target tracking method and system.

Background

[0002] In intelligent video surveillance, the monitoring range of a single camera is very limited and cannot effectively cover the entire monitoring scene. To achieve larger-scale monitoring, such as tracking a car across a city's road network or searching for a suspicious object in a large railway station, multiple surveillance cameras usually need to work together. In a multi-camera surveillance network with non-overlapping fields of view and unknown topology, multi-target matching and tracking is one of the hotspots and difficulties. Cross-camera multi-target matching and tracking associates the same moving target across different surveillance cameras, and is the basis for follow-up work such as motion analysis and behavi...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/20
CPC: G06T2207/10016; G06T2207/30244; G06T7/20
Inventors: 陆平, 于慧敏, 邓硕, 郑伟伟, 高燕, 谢奕, 汪东旭
Owner: ZTE CORP