
Cross-camera pedestrian detection and tracking method based on deep learning

A cross-camera pedestrian detection and tracking technology, applied in the fields of computer vision and video analysis, which addresses problems such as viewing-angle changes, scale changes, and illumination changes, and achieves the effects of increased speed, real-time monitoring, and improved tracking accuracy.

Active Publication Date: 2018-11-23
WUHAN UNIV
17 Cites · 132 Cited by

AI Technical Summary

Problems solved by technology

[0010] The purpose of the present invention is to overcome the problems of target occlusion and of cross-camera illumination changes, viewing-angle changes, and scale changes, and to propose a cross-camera pedestrian detection and tracking method based on deep learning.

Examples

Embodiment

[0092] The embodiment specifically comprises the following steps:

[0093] Step S41: Assume that, for a disappearing target, N-1 candidate images are obtained through pedestrian detection. The input of the pedestrian re-identification module is the image of the disappearing target passed in by the target tracking module and the N-1 candidate images passed in by the pedestrian detection module. Each image first passes through the first (lower) layer of the pedestrian detection network to obtain a shallow feature map; the saliency detection algorithm is then used to extract the saliency of the target, removing redundant background information, and the result is sent into the deep convolutional layers, whose fifth (higher) layer output yields the deep feature map. To fuse the shallow feature map and the deep feature map, the deep feature map is upsampled to the same size as the shallow feature map and the two are concatenated, so the number of channels is ...
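
As a concrete illustration of this fusion step, the following minimal sketch assumes a PyTorch-style backbone; the names stage1, stage5, and saliency_mask are hypothetical stand-ins for the first convolutional layer, the deep convolutional layers up to the fifth-layer output, and the saliency detection algorithm, none of which are interfaces defined in the patent. It upsamples the deep feature map to the shallow map's size and concatenates the two along the channel dimension:

    import torch
    import torch.nn.functional as F

    def fused_features(image, stage1, stage5, saliency_mask):
        # Sketch only: stage1 / stage5 / saliency_mask are assumed callables.
        shallow = stage1(image)                  # shallow feature map from the first (lower) layer, (1, C1, h, w)
        mask = saliency_mask(shallow)            # saliency of the target, suppresses redundant background
        deep = stage5(shallow * mask)            # deep feature map from the fifth (higher) layer, spatially smaller
        # Upsample the deep map to the shallow map's spatial size, then concatenate channels.
        deep_up = F.interpolate(deep, size=shallow.shape[-2:], mode='bilinear', align_corners=False)
        return torch.cat([shallow, deep_up], dim=1)   # channel count = C1 + C5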

Abstract

The invention discloses a cross-camera pedestrian detection and tracking method based on deep learning, which comprises the steps of: training a pedestrian detection network and carrying out pedestrian detection on an input surveillance video sequence; initializing tracking targets with the target boxes obtained by pedestrian detection, extracting shallow-layer and deep-layer features of the regions corresponding to candidate boxes in the pedestrian detection network, and carrying out tracking; when a target disappears, carrying out pedestrian re-identification, in which, after the target-disappearance information is obtained, the image with the highest matching degree with the disappearing target is found among the candidate images obtained by the pedestrian detection network and tracking is continued; and, when tracking ends, outputting the motion tracks of the pedestrian targets under multiple cameras. The features extracted by the method can overcome the influence of illumination variations and viewing-angle variations; moreover, for both the tracking and the pedestrian re-identification parts, the features are extracted from the pedestrian detection network, so that pedestrian detection, multi-target tracking, and pedestrian re-identification are organically fused, and accurate cross-camera pedestrian detection and tracking in large-scale scenes is achieved.
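
Read as a processing pipeline, the abstract amounts to the control flow sketched below. This is a schematic only: detector, tracker, and reid are assumed stand-ins for the trained pedestrian detection network, the multi-target tracker, and the re-identification module, and none of the names are interfaces defined in the patent.

    def cross_camera_tracking(frames, detector, tracker, reid):
        # frames: iterable of (camera_id, frame) pairs in time order across all cameras (assumed input format).
        trajectories = {}                                # target id -> list of (camera_id, box)
        for camera_id, frame in frames:
            boxes = detector(frame)                      # pedestrian detection on the surveillance sequence
            tracks, lost = tracker.update(frame, boxes)  # initialize/update targets with the detected boxes
            for target in lost:                          # a target has disappeared from its camera's view
                match = reid.best_match(target, boxes)   # candidate with the highest matching degree
                if match is not None:
                    tracker.resume(target, match)        # continue tracking the re-identified target
            for t in tracks:
                trajectories.setdefault(t.id, []).append((camera_id, t.box))
        return trajectories                              # motion tracks of pedestrians under multiple cameras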

Description

Technical field

[0001] The invention belongs to the technical fields of computer vision and video analysis, and in particular relates to a method for detecting and tracking pedestrians across cameras based on deep learning.

Background technique

[0002] With the growing public concern over safety and the rapid increase in the number and coverage of surveillance cameras, intelligent multi-camera surveillance plays an increasingly important role. Pedestrians, as the main subject of surveillance, not only share the commonality of general targets but also show considerable intra-class diversity, which is what makes pedestrian detection and tracking difficult. Cross-camera pedestrian detection and tracking refers to detecting and tracking pedestrian targets under multiple cameras: when a target leaves the field of view of the current camera, it can be quickly retrieved in the adjacent camera's area for continuous, omni-directional tracking, and finally an effective pedestrian ...
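
When a target leaves a camera's field of view, the retrieval step described above reduces to comparing the lost target's feature vector with those of the candidates detected in adjacent cameras. The sketch below uses cosine similarity as an assumed matching score; the description only requires that the candidate with the highest matching degree be selected.

    import numpy as np

    def best_candidate(target_feat, candidate_feats):
        # Sketch only: cosine similarity is an assumed choice of matching score.
        target = target_feat / np.linalg.norm(target_feat)
        scores = [float(np.dot(target, c / np.linalg.norm(c))) for c in candidate_feats]
        best = int(np.argmax(scores))
        return best, scores[best]                        # index of the best-matching candidate and its score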

Application Information

IPC(8): G06K9/00; G06K9/62; G06N3/04
CPC: G06V40/23; G06V20/52; G06N3/045; G06F18/253; G06F18/214
Inventors: 陈丽琼, 田胜, 邹炼, 范赐恩, 杨烨, 胡雨涵
Owner: WUHAN UNIV