
Cross-scene pedestrian searching method based on deep learning

A deep-learning-based search method in the field of information technology. It addresses the problems of weak feature robustness and low accuracy in practical search applications, and achieves the effects of reduced local-feature loss, high practical search accuracy, and strong feature robustness.

Inactive Publication Date: 2016-06-01
CHINACCS INFORMATION IND
3 Cites, 32 Cited by

AI Technical Summary

Problems solved by technology

There are many traditional search-and-comparison methods, such as those based on color, texture, and contour. All of them use public libraries as sample libraries and require hand-designed features; the robustness of those features is not strong, and the accuracy of practical search applications is low.



Examples


Embodiment Construction

[0017] The present invention provides a cross-scene pedestrian search method based on deep learning. First, the images are segmented according to their content and a deep network structure suited to pedestrian search is constructed; the processed images are then fed into training to obtain a trained model. Ranking results are then output according to a ranking algorithm, finally achieving the purpose of searching for pedestrians across scenes.

[0018] Referring to Figure 1, the specific method is as follows:

[0019] Step S101: Construct a sample library and apply size normalization and segmentation preprocessing to each picture in it, obtaining a corresponding upper-body image and lower-body image for each picture. After this processing, the sample library contains two image sets: the upper-body image set and the lower-body image set.
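The Step S101 preprocessing can be sketched as follows. This is a minimal illustration, not the patented implementation: the patent excerpt does not specify the normalized size, the resampling method, or the split ratio, so a 128x64 target size, nearest-neighbour resampling, and a midpoint split are all illustrative assumptions.

```python
def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a 2D grayscale image (list of rows).

    Illustrative stand-in for the patent's unspecified size normalization.
    """
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]


def split_upper_lower(img):
    """Split a normalized pedestrian image into upper- and lower-body halves.

    A midpoint split is an assumption; the patent only says each picture is
    segmented into an upper-body image and a lower-body image.
    """
    mid = len(img) // 2
    return img[:mid], img[mid:]


def preprocess(img, out_h=128, out_w=64):
    """Step S101 sketch: normalize size, then segment into two body images."""
    norm = resize_nearest(img, out_h, out_w)
    return split_upper_lower(norm)
```

Running `preprocess` over every picture in the sample library yields the two image sets the step describes: one upper-body set and one lower-body set.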

[0020] Step S102: Construct a convolutional neural network, input the upper body image set and lo...
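The text of Step S102 is truncated here and the excerpt does not give the network architecture. As a minimal illustration of the convolutional feature extraction such a network performs on the two image sets, the following pure-Python sketch implements one "valid" convolution (cross-correlation, as in most CNN libraries) followed by a ReLU activation; the kernel values and shapes are illustrative, not from the patent.

```python
def conv2d_valid(img, kernel):
    """'Valid' 2D convolution of a 2D image with a 2D kernel.

    Computes cross-correlation, the operation CNN layers actually apply.
    """
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(img) - kh + 1
    out_w = len(img[0]) - kw + 1
    out = []
    for r in range(out_h):
        row = []
        for c in range(out_w):
            s = sum(img[r + i][c + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            row.append(s)
        out.append(row)
    return out


def relu(fmap):
    """Element-wise ReLU activation over a 2D feature map."""
    return [[max(0.0, v) for v in row] for row in fmap]
```

In the method, one such network (many stacked layers of this kind, with learned kernels) is trained on the upper-body image set and the lower-body image set to produce the local feature vectors used in the later steps.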



Abstract

The present invention discloses a cross-scene pedestrian searching method based on deep learning. The method comprises: preprocessing each image in a sample library; constructing and training a convolutional neural network; extracting an upper-body local feature vector set and a lower-body local feature vector set from the two groups of preprocessed image sets, then fusing the two local feature vector sets to obtain global feature vectors; preprocessing an image to be searched, extracting its upper-body and lower-body local feature vectors, and fusing the two to obtain a global feature vector; comparing the global feature vector of the image to be searched against the global feature vectors of the sample-library images in turn by cosine similarity, outputting a group of similarity values; and sorting the similarity values according to a sorting algorithm. Because the pedestrian images obtained from surveillance video serve as the sample library, no hand-designed features are needed, robustness is strong, and the accuracy of practical searching is high.
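The fusion, cosine-similarity comparison, and ranking described in the abstract can be sketched in a few lines. Concatenation is used here as the fusion operator, which the excerpt does not fix, so treat it as an assumption; the function and gallery names are illustrative.

```python
import math


def fuse(upper_vec, lower_vec):
    """Fuse the two local feature vectors into one global feature vector.

    Concatenation is one simple fusion rule; the patent excerpt does not
    specify the operator.
    """
    return upper_vec + lower_vec


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def rank_gallery(query_vec, gallery):
    """Compare the query against every sample-library vector and sort.

    `gallery` is a list of (image_id, global_feature_vector) pairs; the
    result is sorted by descending similarity, as the abstract describes.
    """
    scores = [(img_id, cosine_similarity(query_vec, v)) for img_id, v in gallery]
    return sorted(scores, key=lambda t: t[1], reverse=True)
```

A query image's upper- and lower-body vectors are fused once, then `rank_gallery` returns the sample-library images ordered from most to least similar, which is the search output of the method.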

Description

Technical field

[0001] The invention relates to the field of information technology, in particular to a deep-learning-based cross-scene pedestrian search method.

Background technique

[0002] With the launch of the safe-city strategy, more and more networked surveillance cameras are installed in large squares, shopping malls, companies, hospitals, parks, schools, subway stations, and other places where crowds are dense and public-safety incidents are likely. When an incident occurs, the suspicious target person must be found in the surveillance video from multiple cameras; because these cameras are installed across widely separated locations, finding the suspicious target pedestrian in multiple surveillance video feeds poses a great challenge to the staff. There are many traditional search-and-comparison methods, such as those based on color, texture, and contour. All of them use public libraries as sample libraries, and feature design i...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/34; G06K9/46
CPC: G06V40/103; G06V10/267; G06V10/44
Inventors: 舒泓新, 蔡晓东, 宋宗涛, 王爱华
Owner: CHINACCS INFORMATION IND