
A pedestrian re-identification method based on deep learning and overlapped image inter-block measurement

A technology combining overlapping image blocks with deep learning, applied in character and pattern recognition, instruments, biological neural network models, etc. It addresses problems such as background interference, limited recognition performance, and the strong effect of segmentation errors on recognition, so as to improve the recognition rate and recognition performance, with good robustness.

Pending Publication Date: 2019-05-28
TONGJI UNIV

AI Technical Summary

Problems solved by technology

However, this method cannot accurately segment the foreground and background regions of pedestrian images, and region segmentation errors strongly degrade recognition performance.
[0008] Patent CN108171184A proposes a pedestrian re-identification method based on a Siamese network, which also uses the ResNet-50 network for feature extraction. However, during feature extraction this method is easily disturbed by the pedestrian image background, noise, occlusion, and similar problems, which restricts its recognition performance.

Method used



Examples


Embodiment

[0063] To make the object, technical scheme, and advantages of the present invention clearer, the invention is described in further detail below in conjunction with the embodiments, following the algorithm flow chart shown in Figure 1. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it.

[0064] Step 1: Construct a deep learning neural network model based on multiple overlapping feature images, described as follows. The model is formed by sequentially connecting a feature extraction module, a multi-block overlapping feature image module, and a multi-classification module, and is mainly used to extract robust, highly discriminative features from pedestrian images. The deep learning neural network model based on multi-block overlapping feature images adds a multi-bl...
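The multi-block overlapping partition described above can be illustrated with a small sketch. The excerpt does not give the exact block count or overlap size, so the parameters below are hypothetical: a feature map of height `height` is cut into `n_blocks` horizontal stripes of equal height, with adjacent stripes sharing `overlap` rows.

```python
def overlapping_blocks(height, n_blocks, overlap):
    """Return (start, end) row ranges for n_blocks horizontal stripes
    of equal height over a feature map, where adjacent stripes share
    `overlap` rows. Parameters are illustrative, not from the patent."""
    # total height = block_h + (n_blocks - 1) * stride, with stride = block_h - overlap
    block_h, rem = divmod(height + (n_blocks - 1) * overlap, n_blocks)
    if rem:
        raise ValueError("height, n_blocks and overlap are incompatible")
    stride = block_h - overlap
    return [(i * stride, i * stride + block_h) for i in range(n_blocks)]

# e.g. a 24-row feature map cut into 4 stripes, each sharing 4 rows
# with its neighbour
print(overlapping_blocks(24, 4, 4))  # [(0, 9), (5, 14), (10, 19), (15, 24)]
```

Each stripe then feeds the multi-classification branch, and the shared rows are what the inter-block measurement operates on.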

Specific embodiment

[0119] Figure 1 is the flow chart of the OBM algorithm implementation of the present invention; the specific implementation is as follows:

[0120] 1. Build the feature extraction module;

[0121] 2. Build the multiple overlapping feature image module;

[0122] 3. Build the multi-classification module;

[0123] 4. Construct the multi-classification loss function ClassificationLoss;

[0124] 5. Construct the metric loss function OverlapBlocksLoss between overlapping image blocks;

[0125] 6. Weight the ClassificationLoss and OverlapBlocksLoss functions to obtain the final model training loss function;

[0126] 7. Resize all training set images to 384×128;

[0127] 8. Set the training batch size (Batch Size) to 64 and the number of training epochs (Epoch) to 80;

[0128] 9. Repeatedly input training images for model training, calculate the loss value with the model training loss function based on the measurement between overlapping image blocks, and use the stochastic...
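Steps 4–6 above can be sketched numerically. The exact form of OverlapBlocksLoss is not given in this excerpt, so the sketch uses a plausible placeholder (mean squared distance between feature vectors of adjacent overlapping blocks) alongside a standard softmax cross-entropy, combined with hypothetical weights `alpha` and `beta`:

```python
import math

def classification_loss(logits, label):
    """Standard softmax cross-entropy for a single sample."""
    m = max(logits)  # subtract the max for numerical stability
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[label]

def overlap_blocks_loss(block_feats):
    """Placeholder inter-block metric: mean squared distance between
    feature vectors of adjacent blocks. The patent's exact
    OverlapBlocksLoss formula is not shown in this excerpt."""
    pairs = list(zip(block_feats, block_feats[1:]))
    return sum(sum((a - b) ** 2 for a, b in zip(f, g))
               for f, g in pairs) / len(pairs)

def training_loss(logits, label, block_feats, alpha=1.0, beta=0.5):
    """Step 6: weighted sum of the two losses (alpha, beta hypothetical)."""
    return alpha * classification_loss(logits, label) \
         + beta * overlap_blocks_loss(block_feats)

# toy logits and per-block features for one training sample
feats = [[1.0, 0.0], [0.8, 0.2], [0.5, 0.5]]
loss = training_loss([2.0, 0.5, 0.1], 0, feats)
```

During training (steps 7–9), this combined loss would be evaluated per batch and minimized with a stochastic gradient method, as the truncated step 9 indicates.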



Abstract

The invention relates to a pedestrian re-identification method based on deep learning and measurement between overlapping image blocks. The method comprises the following steps: 1) construct a deep learning neural network model based on multiple overlapping feature images; 2) construct a model training loss function based on the measurement between overlapping image blocks; 3) train the model using the training samples; 4) input the pedestrian image to be identified and all comparison images into the model to obtain image features; 5) obtain the final distance between the pedestrian image to be identified and each comparison image using the Euclidean distance formula; and 6) sort the distances to obtain the comparison image library matching sequence corresponding to the pedestrian to be identified. Compared with the prior art, the method has the advantages of high accuracy, good robustness, and the like.
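Steps 4)–6) of the abstract reduce to a nearest-neighbour ranking: compute the Euclidean distance from the query feature to every comparison (gallery) feature, then sort. A minimal sketch, with toy feature vectors standing in for the network's output:

```python
import math

def rank_gallery(query_feat, gallery_feats):
    """Euclidean distance from the query feature to each gallery
    feature, returned as gallery indices sorted nearest-first."""
    dists = [math.dist(query_feat, g) for g in gallery_feats]
    return sorted(range(len(gallery_feats)), key=dists.__getitem__)

# toy features: gallery image 2 is closest to the query
query = [0.1, 0.9, 0.3]
gallery = [[0.9, 0.1, 0.5], [0.4, 0.6, 0.2], [0.1, 0.8, 0.3]]
print(rank_gallery(query, gallery))  # [2, 1, 0]
```

The returned index order is the "comparison image library matching sequence" of step 6); the top-ranked gallery image is the best match for the pedestrian to be identified.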

Description

technical field [0001] The invention relates to the field of intelligent analysis of surveillance video, in particular to a pedestrian re-identification method based on deep learning and measurement between overlapping image blocks. Background technique [0002] Pedestrian re-identification refers to the technology of matching pedestrians across different camera views in a video surveillance scene captured by multiple cameras. It is a key technology in pedestrian identification, pedestrian trajectory tracking, and pedestrian search, and a popular research topic in the field of computer vision. However, due to the interference of environmental factors such as illumination, viewing angle, and occlusion in pedestrian images taken from different camera angles, traditional feature extraction algorithms cannot extract preferred image features and express them through better semantics, resulting in limited recognition rates; traditional metric learning algorithms are affected by limited b...

Claims


Application Information

IPC(8): G06K9/00; G06K9/62; G06N3/04
Inventor: 赵才荣, 陈亦鹏
Owner: TONGJI UNIV