
Target detection method based on deep learning

A target detection technology based on deep learning, applied in the field of computer vision. It can solve problems such as sensitivity to environmental changes, low detection efficiency, and long detection time, and achieves the effects of strong robustness, improved detection accuracy, and fewer missed and false detections.

Pending Publication Date: 2019-12-13
UNIV OF SCI & TECH LIAONING

AI Technical Summary

Problems solved by technology

1. The first class of traditional methods performs well only when the background is still; when the texture and colour distribution of the background and the detected object are too similar, the target is difficult to detect.
2. Background subtraction subtracts the pixels of a reference background image from the pixels of the current input image to obtain the region where the two differ (see the sketch after this list). The method's robustness is sensitive to environmental changes, so it is only suitable for detecting moving targets against a relatively stable background.
3. Methods based on HOG, Haar, or SIFT features traverse the entire image with sliding windows of different scales, extract features from each window, and then classify the window contents with an SVM or AdaBoost classifier. This exhaustive search consumes a great deal of time.
4. The multi-scale deformable part model (DPM) detection algorithm uses an improved HOG feature together with an SVM classifier and the sliding-window idea, and adopts a multi-component strategy for the multi-view problem. The algorithm is mainly suited to face and pedestrian detection, is relatively complex, and has low detection efficiency.
5. Although the deep-learning-based SSD algorithm introduces the concept of multi-scale detection, its results on small target objects are still not ideal.
6. Faster RCNN, a deep-learning detection method based on candidate regions, performs poorly when detecting small targets and targets with large mutual overlap.
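The background-subtraction idea in item 2 can be illustrated with a toy sketch. This is a minimal NumPy version with made-up image sizes and threshold, not part of the patent itself; it only shows why the method is fragile once the background or illumination changes, since any such change also shifts the pixel difference.

```python
import numpy as np

def background_subtraction(frame, background, thresh=30):
    """Toy sketch of item 2: threshold the absolute difference between the
    current frame and a reference background image to get a foreground mask.
    Any illumination or background change also shifts the difference, which
    is why the method only works with a relatively stable background."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8)   # 1 = candidate moving-object pixel

background = np.full((120, 160), 100, dtype=np.uint8)   # synthetic grey background
frame = background.copy()
frame[40:80, 60:100] = 200                               # a bright "object" appears
mask = background_subtraction(frame, background)
print(mask.sum())   # number of pixels flagged as foreground (here 40 * 40 = 1600)
```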
The performance of deep-learning-based target detection is greatly improved over traditional detection methods, but several shortcomings remain: 1. Feature learning of the target is incomplete, so targets that are too small cannot be detected.
2. Because Faster RCNN uses non-maximum suppression to eliminate candidate boxes, targets that overlap or occlude one another are prone to missed detections (a minimal NMS sketch follows this list).
3. The Faster RCNN algorithm uses the VGG16 network to extract target features. Since the geometric shape of the convolution kernel used in the convolution operation is fixed, the geometric structure of the network formed by stacking such kernels is also fixed. This limits feature extraction to a certain extent, so the network cannot cope well with geometric deformation; for targets presented in different states under different viewing angles, the detection algorithm does not perform well.
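The missed detections described in item 2 come from greedy non-maximum suppression. Below is a minimal NumPy sketch of standard greedy NMS (not the patent's own code; the box coordinates and the 0.5 IoU threshold are made up for illustration), showing how a genuinely distinct object that heavily overlaps a stronger detection gets discarded.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box and drop
    every remaining box whose IoU with it exceeds iou_thresh. This is the
    step that can erase a real second object that heavily overlaps a
    stronger detection, i.e. the missed detections described above."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]            # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the best box with all boxes still in play
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]
    return keep

boxes = np.array([[10, 10, 60, 60], [15, 12, 65, 62], [100, 100, 150, 150]], float)
scores = np.array([0.9, 0.85, 0.8])
print(nms(boxes, scores))   # keeps boxes 0 and 2; the overlapping box 1 is suppressed
```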




Embodiment Construction

[0038] The specific embodiments provided by the present invention will be described in detail below in conjunction with the accompanying drawings.

[0039] A method for object detection based on deep learning, comprising:

[0040] 1) In the Faster RCNN method, the VGG16 network used to extract image features is replaced by a 101-layer residual network with stronger expressive ability and deeper layers;
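As a rough illustration of this backbone swap (not the patent's implementation), torchvision's detection API can assemble a Faster RCNN on a 101-layer ResNeXt backbone. The model name resnext101_32x8d, the FPN wrapper, and the 21-class head are illustrative assumptions; older torchvision releases take a pretrained flag here, while newer ones use weights instead.

```python
import torch
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# "resnext101_32x8d" is torchvision's 101-layer, 32-path ResNeXt; the FPN
# wrapper and the class count are illustrative choices, not the patent's.
backbone = resnet_fpn_backbone("resnext101_32x8d", pretrained=False)
model = FasterRCNN(backbone, num_classes=21)     # 20 object classes + background

model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 600, 800)])   # one image, CHW, values in [0, 1]
print(detections[0]["boxes"].shape)                 # (num_detections, 4)
```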

[0041] A ResNeXt network with a 101-layer structure is used to learn the target features. The ResNeXt network is an upgraded version of the ResNet network: it retains ResNet's basic stacking method, but each block is built from parallel branches with the same topology, splitting the ResNet path into 32 independent paths (the "cardinality"). These 32 paths perform convolution operations on the input at the same time, and the outputs of the different paths are finally summed as the block's result (a minimal block sketch follows). This operation makes the division of labor in the netwo...
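A minimal PyTorch sketch of one such block, under the standard assumption that a grouped 3x3 convolution with groups=32 is equivalent to 32 parallel same-topology paths whose outputs are summed; the channel widths are illustrative, not taken from the patent.

```python
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """Sketch of a single ResNeXt bottleneck block with cardinality 32.
    The grouped 3x3 convolution (groups=32) is equivalent to 32 parallel
    paths of identical topology whose outputs are summed."""
    def __init__(self, channels=256, cardinality=32, bottleneck_width=4):
        super().__init__()
        inner = cardinality * bottleneck_width           # 32 * 4 = 128 inner channels
        self.body = nn.Sequential(
            nn.Conv2d(channels, inner, kernel_size=1, bias=False),
            nn.BatchNorm2d(inner), nn.ReLU(inplace=True),
            nn.Conv2d(inner, inner, kernel_size=3, padding=1,
                      groups=cardinality, bias=False),   # the 32 independent paths
            nn.BatchNorm2d(inner), nn.ReLU(inplace=True),
            nn.Conv2d(inner, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))              # identity shortcut + summed paths

x = torch.rand(1, 256, 38, 50)
print(ResNeXtBlock()(x).shape)                           # torch.Size([1, 256, 38, 50])
```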



Abstract

The invention discloses a target detection method based on deep learning. The VGG16 network used for extracting image features in the Faster RCNN method is replaced with a 101-layer residual network with higher expressive capability and deeper layers. The structure of the residual unit is changed to a pre-activation form, so that the network is smoother in the forward and backward propagation processes. Taking the most basic convolution as the entry point for improvement, a deformable convolution is introduced, whose kernel size and sampling positions are dynamically adjusted according to the image content currently being recognized. Based on characteristics such as the diversity of target forms, too-small inter-class differences, unclear targets, too-small targets, occlusion between targets, and complex backgrounds, the candidate-region-based deep learning algorithm Faster RCNN is improved and a new target detection method is established. The resulting target detection algorithm is highly robust: whether the target is occluded, differently illuminated, similar to its background, or unclear, the detection result is not affected, and missed and false detections are greatly reduced.
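The deformable-convolution idea in the abstract can be sketched with torchvision's DeformConv2d: a small ordinary convolution predicts per-position (dx, dy) offsets from the current feature map, and the deformable convolution then samples its 3x3 kernel at those shifted positions. The layer sizes below are illustrative assumptions, not the patent's configuration.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableConvBlock(nn.Module):
    """Sketch: a 3x3 deformable convolution whose sampling offsets are
    predicted from the input feature map, so the effective kernel shape
    adapts to the image content at every spatial position."""
    def __init__(self, in_ch=64, out_ch=128):
        super().__init__()
        # Two offsets (dx, dy) per kernel element: 2 * 3 * 3 = 18 channels.
        self.offset_pred = nn.Conv2d(in_ch, 18, kernel_size=3, padding=1)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        offsets = self.offset_pred(x)         # content-dependent sampling offsets
        return self.deform_conv(x, offsets)   # kernel samples at the shifted positions

feat = torch.rand(1, 64, 50, 50)
print(DeformableConvBlock()(feat).shape)      # torch.Size([1, 128, 50, 50])
```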

Description

technical field [0001] The invention relates to the technical field of computer vision, and in particular to a target detection method based on deep learning. Background technique [0002] Vision is the main way in which human beings perceive external information, and it provides vital support for people to distinguish things within their sight. Target detection is one of the most classic research topics in computer vision technology. It has important application value in the new retail industry, intelligent traffic control, intelligent highway intersection management, community security, and even in the military field. Target detection mainly refers to detecting, extracting, and segmenting targets from background information, quickly and accurately representing and locating targets in the input image, and laying the foundation for reading information about and understanding the behaviour of the targets. Therefore, the accuracy of tar...


Application Information

IPC(8): G06K9/00, G06N3/04, G06N3/08
CPC: G06N3/08, G06V20/20, G06V2201/07, G06N3/045
Inventor: 赵骥, 于海龙, 吴晓翎
Owner: UNIV OF SCI & TECH LIAONING