Mask pooling model training and pedestrian re-identification method for pedestrian re-identification

A pedestrian re-identification and model training technology applied in the field of computer vision, aiming to achieve a good training effect and to improve the accuracy and efficiency of pedestrian re-identification

Active Publication Date: 2019-07-05
SUN YAT SEN UNIV

AI Technical Summary

Problems solved by technology

[0007] There are many loss methods used to guide model training



Examples


Embodiment 1

[0059] As shown in Figure 1, this embodiment provides a mask pooling model training method for pedestrian re-identification, including the following training steps:

[0060] S1. Obtain anchor image a, positive sample image p, and negative sample image n;

[0061] S2. Input a, p, n and their corresponding masks into the mask pooling model respectively to obtain the corresponding tensors Ta, Tp, Tn;

[0062] S3. Perform pooling and convolution operations on Ta, Tp, Tn respectively to obtain the corresponding tensors Ha, Hp, Hn;

[0063] S4. Input Ha, Hp, Hn into the classifier respectively to obtain the corresponding prediction results Ra, Rp, Rn;

[0064] S5. Calculate the loss value according to the prediction results Ra, Rp, Rn;

[0065] S6. Train the mask pooling model according to the loss value.

[0066] The traditional triplet loss method requires three input images: an anchor image a, a positive sample image p, and ...
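The following is a minimal PyTorch sketch of training steps S1-S6, assuming one possible realization of the mask pooling model: a small convolutional backbone whose feature maps are multiplied by a downsampled pedestrian mask before pooling, convolution and classification. The backbone architecture, the point at which the mask is applied, and the use of identity cross-entropy as the loss are assumptions made for illustration; the patent itself only specifies the inputs, the intermediate tensors Ta/Tp/Tn and Ha/Hp/Hn, the predictions Ra/Rp/Rn, and that a loss value computed from the predictions drives training.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskPoolingModel(nn.Module):
    # Hypothetical mask pooling model: a small CNN backbone whose feature maps
    # are multiplied by a downsampled pedestrian mask so that background
    # activations are suppressed, followed by the pooling, convolution and
    # classifier of steps S3-S4.
    def __init__(self, num_identities=751, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(                          # S2: image -> tensor T
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.post_conv = nn.Conv2d(64, feat_dim, 1)             # S3: convolution -> H
        self.classifier = nn.Linear(feat_dim, num_identities)   # S4: classifier -> R

    def forward(self, img, mask):
        t = self.backbone(img)                                  # tensor T
        m = F.interpolate(mask, size=t.shape[-2:], mode="nearest")
        t = t * m                                               # suppress background features
        h = F.adaptive_avg_pool2d(t, 1)                         # S3: pooling
        h = self.post_conv(h).flatten(1)                        # S3: convolution -> H
        return self.classifier(h)                               # S4: prediction R

model = MaskPoolingModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(a, p, n, mask_a, mask_p, mask_n, id_a, id_p, id_n):
    # One pass over steps S1-S6 for a batch of (anchor, positive, negative) triplets.
    r_a = model(a, mask_a)            # S2-S4 for the anchor image
    r_p = model(p, mask_p)            # ... for the positive sample
    r_n = model(n, mask_n)            # ... for the negative sample
    # S5: loss value from the predictions; identity cross-entropy is assumed here,
    # the patent only states that a loss is computed from Ra, Rp, Rn.
    loss = (F.cross_entropy(r_a, id_a)
            + F.cross_entropy(r_p, id_p)
            + F.cross_entropy(r_n, id_n))
    optimizer.zero_grad()             # S6: update the mask pooling model
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch the images are expected as (B, 3, H, W) tensors and the masks as binary (B, 1, H, W) tensors; id_a, id_p, id_n are identity labels, with id_a equal to id_p and id_n belonging to a different identity, matching the anchor/positive/negative roles of step S1.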

Embodiment 2

[0122] A pedestrian re-identification method, comprising: inputting a pedestrian image to be recognized into a mask pooling model, wherein the mask pooling model is trained using the mask pooling model training method described in Embodiment 1.

[0123] Through the mask pooling model described in Embodiment 1, background features can be gradually removed and the most critical pedestrian features can be obtained. Figure 4 shows the result of removing background features from pedestrian images: the outline features of the pedestrians are preserved to the greatest extent while the cluttered background is effectively removed, improving the accuracy and efficiency of pedestrian re-identification.
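As a hedged illustration of how the trained model from Embodiment 1 could be used for re-identification, the sketch below (reusing the hypothetical MaskPoolingModel defined above) extracts the pre-classifier feature H for a query image and for a gallery of images, then ranks the gallery by cosine similarity. The choice of feature and similarity metric are assumptions; the patent only specifies feeding the pedestrian image to be recognized into the trained mask pooling model.

@torch.no_grad()
def extract_feature(model, img, mask):
    # Masked feature H for one image, reusing the hypothetical model defined above.
    model.eval()
    t = model.backbone(img)
    m = F.interpolate(mask, size=t.shape[-2:], mode="nearest")
    h = F.adaptive_avg_pool2d(t * m, 1)
    return F.normalize(model.post_conv(h).flatten(1), dim=1)

def rank_gallery(model, query, query_mask, gallery, gallery_masks):
    # Rank gallery images by cosine similarity to the query (an assumed retrieval step).
    q = extract_feature(model, query, query_mask)                       # (1, feat_dim)
    g = torch.cat([extract_feature(model, img, m)
                   for img, m in zip(gallery, gallery_masks)])          # (N, feat_dim)
    scores = (g @ q.t()).squeeze(1)                                     # cosine similarities
    return torch.argsort(scores, descending=True)                       # best matches first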



Abstract

The invention relates to a mask pooling model training method and a pedestrian re-identification method for pedestrian re-identification. The method comprises the steps of: S1, obtaining an anchor image a, a positive sample image p and a negative sample image n; S2, respectively inputting a, p, n and the masks corresponding to a, p and n into the mask pooling model to obtain corresponding three-dimensional tensors Ta, Tp and Tn; S3, respectively carrying out pooling and convolution operations on Ta, Tp and Tn to obtain corresponding tensors Ha, Hp and Hn; S4, inputting Ha, Hp and Hn into a classifier respectively to obtain corresponding prediction results Ra, Rp and Rn; S5, calculating a loss value according to the prediction results Ra, Rp and Rn; and S6, training the mask pooling model according to the loss value. According to the invention, the non-background information in the image can be enhanced, and the most critical features of the image can be learned.

Description

Technical field

[0001] The present invention relates to the technical field of computer vision, and more specifically, to a mask pooling model training method for pedestrian re-identification and a pedestrian re-identification method.

Background technique

[0002] Person re-identification (Person ReID), also known as pedestrian re-identification, is a technology that uses computer vision to determine whether a specific pedestrian is present in an image or video sequence, and is widely regarded as a sub-problem of image retrieval. Given a monitored pedestrian image, Person ReID retrieves images of the same pedestrian across camera devices. It aims to compensate for the visual limitations of fixed cameras, can be combined with pedestrian detection and pedestrian tracking techniques, and is widely applicable to intelligent video surveillance, intelligent security, and other fields.

[0003] Due to the differences between different camera equipment, ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62
CPC: G06V40/103, G06F18/24, G06F18/214
Inventor: 卢宇彤, 蔡婷婷, 郑馥丹, 王莹, 邓楚富, 陈志广
Owner: SUN YAT SEN UNIV