
Deep-learning-based multi-target pedestrian detection and tracking method

A pedestrian detection and deep learning technology, applied in the field of computer vision, that addresses the problems of pedestrians going undetected, failing to be re-identified, and occlusion by obstacles going unhandled, and improves both the tracking effect and the matching effect.

Active Publication Date: 2018-01-09
BEIHANG UNIV


Problems solved by technology

[0006] At the same time, differences in the distance between a tracked pedestrian and the camera change the pedestrian's imaged size. Most existing methods, such as the Struck, CT, KCF, and ASLA algorithms, rarely account for these imaging changes, which reduces the accuracy of the tracking box and degrades the pedestrian tracking effect. Some methods address the scaling problem by first obtaining a more accurate pedestrian size through detection and then performing association matching; however, such methods usually require the pedestrian class to be given, and when pedestrians are occluded they are often missed or detected incorrectly.

[0007] In practical applications, since the occlusion problem is largely tied to the semantics of the environment, most current pedestrian tracking algorithms do not handle occlusion by obstacles. After complete occlusion, pedestrians are lost and cannot be re-identified; in wide fields of view with occlusions, tracks are frequently lost.




Embodiment Construction

[0047] Specific embodiments of the present invention will be described in detail below with reference to the accompanying drawings.

[0048] As shown in Figure 1, a deep-learning-based multi-target pedestrian detection and tracking method of the present invention comprises the following steps:

[0049] Step 1: Perform multi-target pedestrian detection and joint point extraction on the input video frame sequence, and save the obtained position information and joint point information as the input to the next stage. Specifically, this is implemented through the following steps:
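As a minimal sketch of this first stage, the loop below runs a detector and a joint extractor per frame and collects the position and joint information for the next stage. The `PedestrianObservation` record and the stub `fake_detect`/`fake_joints` functions are illustrative assumptions standing in for the patent's trained networks, which are not specified at this level of detail.

```python
from dataclasses import dataclass

@dataclass
class PedestrianObservation:
    frame_idx: int   # which video frame this detection came from
    bbox: tuple      # (x, y, w, h) pedestrian bounding box from the detector
    joints: list     # [(x, y), ...] extracted body joint coordinates

def run_stage_one(frames, detect, extract_joints):
    """Detect pedestrians and extract joints per frame; save results for stage two."""
    observations = []
    for i, frame in enumerate(frames):
        for bbox in detect(frame):
            joints = extract_joints(frame, bbox)
            observations.append(PedestrianObservation(i, bbox, joints))
    return observations

# stub detectors standing in for the trained networks (hypothetical)
fake_detect = lambda frame: [(10, 20, 30, 60)]
fake_joints = lambda frame, bbox: [(25, 30), (25, 50)]
obs = run_stage_one(["frame0", "frame1"], fake_detect, fake_joints)
```

Keeping the detector and joint extractor as injected callables mirrors the patent's staging: stage two consumes only the saved position and joint records, not the raw frames.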

[0050] 1.1 First, in the preparatory stage of this method, a ResNet-based pedestrian detection convolutional neural network is trained to extract the visual features of the video frames. The convolutional network uses repetitions of the unit shown in Figure 2: x is the input signal of the neural network, and after x passes through a two-layer neural network, the result is added to x itself, and then ...
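The unit described above is the standard ResNet residual connection: the output of a two-layer transform F(x) is added back to the input x. The numpy sketch below illustrates just this identity-shortcut arithmetic, not the actual trained detection network; the fully connected weights and ReLU activations are simplifying assumptions (real ResNet units use convolutions and batch normalization).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_unit(x, w1, w2):
    """Residual unit: y = relu(F(x) + x), where F is a two-layer transform."""
    h = relu(x @ w1)        # first layer with activation
    f = h @ w2              # second layer output F(x)
    return relu(f + x)      # add the input back (identity shortcut)

# toy example on a 4-dim feature vector with near-zero weights,
# so the unit initially behaves like the identity (plus ReLU)
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
w1 = rng.standard_normal((4, 4)) * 0.01
w2 = rng.standard_normal((4, 4)) * 0.01
y = residual_unit(x, w1, w2)
```

Because F(x) starts near zero, the unit initially passes x through almost unchanged, which is what makes very deep ResNet feature extractors trainable.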


Abstract

The invention discloses a deep-learning-based multi-target pedestrian detection and tracking method. The method comprises the following steps: step one, carrying out multi-target pedestrian detection and joint point extraction on an input video and storing the obtained position information and joint point information as inputs for the next stage; step two, selecting a key frame at an interval of a certain number of frames and extracting apparent features of the pedestrians in the key frame; specifically, according to the obtained position information and joint point information, extracting upper-body pose features and color histogram features respectively, for pedestrian association between key frames; and step three, continuously tracking the pedestrians in the key frames, using a threshold slow-start strategy, a block matching rate model detection algorithm, a historical-state-keeping voting algorithm, and an occlusion detection method to improve the tracking effect, returning to step one after tracking ends, and detecting key frames again to ensure the stability of the method.
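Step two above associates pedestrians across key frames partly by color histogram features. The sketch below shows one plausible form of such a feature, computed over the upper half of the bounding box and compared by histogram intersection; the exact bin counts, region split, and similarity measure are assumptions for illustration, not the patent's specified parameters.

```python
import numpy as np

def upper_body_histogram(frame, bbox, bins=8):
    """Color histogram over the upper half of a pedestrian's bounding box,
    used as an appearance feature for association between key frames."""
    x, y, w, h = bbox
    patch = frame[y:y + h // 2, x:x + w]          # upper-body region only
    hist, _ = np.histogramdd(
        patch.reshape(-1, 3),                     # one (R, G, B) row per pixel
        bins=(bins, bins, bins),
        range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()              # normalize: comparable across sizes

def histogram_similarity(h1, h2):
    """Histogram intersection: 1.0 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())

# toy frame: 100x100 black image, one pedestrian box
frame = np.zeros((100, 100, 3), dtype=np.uint8)
h = upper_body_histogram(frame, (10, 10, 20, 40))
```

Restricting the histogram to the upper body makes the feature less sensitive to leg occlusion and gait changes, which fits the patent's emphasis on robust association under occlusion.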

Description

technical field

[0001] The invention belongs to the technical field of computer vision, and in particular relates to a multi-target pedestrian detection and tracking method based on deep learning.

Background technique

[0002] With the development of computer computing power, computer vision technology has gradually been applied in real life to make people's lives more convenient. In the field of video surveillance, one of the most important tasks is to detect people and interpret their behavior. We need to know whether and where a given target is present in the surveillance system. Identifying pedestrians in video sequences is the pedestrian detection problem. Considering spatial-temporal correlation, the problem of identifying and tracking a target that appears across different surveillance videos is called target tracking. Pedestrian detection technology is the basis of pedestrian tracking. The video frames in the surveillance video have proble...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00, G06K9/46, G06K9/66, G06N3/04
Inventors: 周忠, 吴威, 孙晨新, 姜那, 李鹤兮
Owner: BEIHANG UNIV