
Multi-scale small object detection method based on deep-learning hierarchical feature fusion

A feature-fusion and deep-learning technology, applied to instruments, character and pattern recognition, computer components, etc. It addresses problems such as scale constraints, low detection accuracy, and the difficulty of detecting small objects, and achieves improved real-time performance, recognition rate, and positioning accuracy.

Active Publication Date: 2017-11-10
HARBIN INST OF TECH

AI Technical Summary

Problems solved by technology

[0006] The present invention addresses two shortcomings of existing object detection in real scenes: detection accuracy is very low, and detecting small objects is very difficult because of scale constraints. It proposes a multi-scale small object detection method based on feature fusion between deep-learning levels.

Method used



Examples


Specific Embodiment 1

[0024] Specific Embodiment 1: The multi-scale small object detection method of this embodiment, based on feature fusion between deep-learning levels, includes:

[0025] Step 1. Use the pictures of a real-scene database as training samples. Each picture in the training samples carries pre-set marker positions and category information; the marker position indicates the location of the object to be recognized, and the category information indicates the kind of object.
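The marker positions and category information of Step 1 can be sketched as a minimal per-image annotation record. The field names and example values below are illustrative assumptions, not taken from the patent:

```python
# One training sample: a real-scene picture plus its pre-set annotations.
# Field names and example values are assumptions for illustration only.
sample = {
    "image": "scene_0001.jpg",                 # picture from the real-scene database
    "boxes": [(48, 60, 112, 150),              # marker positions as (x1, y1, x2, y2)
              (200, 30, 260, 90)],
    "labels": ["pedestrian", "traffic_sign"],  # category information per box
}

# Every marker position must carry exactly one category label.
assert len(sample["boxes"]) == len(sample["labels"])
print(len(sample["boxes"]))  # 2
```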

[0026] Step 2. Initialize the candidate region generation network from a ResNet50 classification model pre-trained on ImageNet, and train the candidate region generation network. During training, an input image is randomly selected from the dataset each time, and a convolutional neural network is used to generate a fused feature map; the fused feature map is produced by fusing multiple feature maps genera...
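One common way to realize the fusion described in Step 2 is to upsample a deeper, low-resolution feature map and add it element-wise to a shallower, high-resolution one. The patent does not specify the exact fusion operator, so the NumPy sketch below (nearest-neighbour upsampling plus addition) is an assumed minimal variant:

```python
import numpy as np

def upsample_nearest(fmap, factor):
    """Nearest-neighbour upsampling of a (C, H, W) feature map."""
    return fmap.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse_feature_maps(shallow, deep):
    """Fuse a high-resolution shallow map with an upsampled deep map.

    shallow: (C, H, W) from an early convolutional layer
    deep:    (C, H//k, W//k) from a later layer; upsampled to match, then added.
    """
    factor = shallow.shape[1] // deep.shape[1]
    return shallow + upsample_nearest(deep, factor)

shallow = np.random.rand(8, 16, 16)   # fine spatial detail, weak semantics
deep = np.random.rand(8, 8, 8)        # coarse detail, strong semantics
fused = fuse_feature_maps(shallow, deep)
print(fused.shape)  # (8, 16, 16)
```

Shallow layers preserve the spatial detail that small objects need, while deep layers carry stronger semantics; element-wise addition combines both at the finer resolution.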

Specific Embodiment 2

[0036] Specific Embodiment 2: This embodiment differs from Specific Embodiment 1 in that, in Step 1, the training samples include: 1. basic samples drawn from the MS COCO dataset; 2. flipped samples obtained by flipping the basic samples left and right; 3. samples obtained by enlarging the basic and flipped samples by a certain factor. The purpose of this embodiment is to make the training samples more comprehensive and varied, and thereby raise the recognition rate of the model.
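A left-right flip must also mirror the marker positions, or the annotations no longer match the flipped picture. A minimal sketch of that transform with NumPy, using (x1, y1, x2, y2) pixel boxes (the helper name is an assumption):

```python
import numpy as np

def flip_horizontal(image, boxes):
    """Flip an (H, W, C) image left-right and mirror its (x1, y1, x2, y2) boxes."""
    W = image.shape[1]
    flipped = image[:, ::-1, :]
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    # The old right edge becomes the new left edge, and vice versa.
    new_boxes = np.stack([W - 1 - x2, y1, W - 1 - x1, y2], axis=1)
    return flipped, new_boxes

image = np.zeros((4, 6, 3))
boxes = np.array([[1, 0, 3, 2]])
_, new_boxes = flip_horizontal(image, boxes)
print(new_boxes.tolist())  # [[2, 0, 4, 2]]
```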

[0037] Other steps and parameters are the same as in the first embodiment.

Specific Embodiment 3

[0038] Specific Embodiment 3: This embodiment differs from Specific Embodiment 1 or 2 in that, in Step 2, the number of candidate regions generated by sliding convolution kernels over the fused feature map is 20,000. For each candidate region, if its overlap with any marker position is greater than 0.55, it is treated as a positive sample; if the overlap is less than 0.35, it is treated as a negative sample. When computing the loss function, 256 candidate regions are selected according to their scores, with a 1:1 ratio of positive to negative samples; if there are fewer than 128 positive samples, negative samples fill the remainder. Each final candidate region can be represented as (x1, y1, x2, y2), where (x1, y1) are the pixel coordinates of the upper-left corner of the candidate region and (x2, y2) are the pixel coordinates of the lower-right corner. Using this repre...
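The overlap test of this embodiment is the standard intersection-over-union (IoU) criterion. The sketch below applies the 0.55/0.35 thresholds stated above; the function names are illustrative, and treating regions that fall between the thresholds as ignored is an assumption the patent text does not spell out:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def label_candidates(candidates, gt_boxes, pos_thresh=0.55, neg_thresh=0.35):
    """1 = positive sample, 0 = negative sample, -1 = ignored (between thresholds)."""
    labels = []
    for c in candidates:
        best = max(iou(c, g) for g in gt_boxes)
        labels.append(1 if best > pos_thresh else (0 if best < neg_thresh else -1))
    return labels

cands = [(0, 0, 10, 10), (20, 20, 30, 30), (0, 0, 10, 22)]
print(label_candidates(cands, [(0, 0, 10, 10)]))  # [1, 0, -1]
```

From the labelled pool, 256 regions would then be sampled by score at the stated 1:1 positive-to-negative ratio.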



Abstract

The invention relates to object detection technology in the machine vision field, and in particular to a multi-scale small object detection method based on deep-learning hierarchical feature fusion. To overcome the defects of existing object detection, namely low detection precision in real scenes, constraints imposed by scale, and difficulty with small objects, the invention proposes a multi-scale small object detection method based on deep-learning hierarchical feature fusion. The detection method comprises the following steps: taking an image of a real scene as the research object, extracting features of the input image with a constructed convolutional neural network; generating a small number of candidate regions with a candidate region generation network; mapping each candidate region onto the feature map generated by the convolutional neural network to obtain its features; passing these through a pooling layer to obtain features of fixed size and dimension, which are input to the fully connected layer; two branches after the fully connected layer respectively output the recognized category and the regressed position. The method disclosed by the invention is suitable for object detection in the machine vision field.
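The pooling step in the abstract, which turns a variable-sized candidate region into a feature of fixed size and dimension, can be sketched as RoI max pooling over a grid. The patent does not fix the pooling details, so this NumPy version is an assumed variant; it requires the region to span at least out_size pixels in each dimension:

```python
import numpy as np

def roi_max_pool(fmap, roi, out_size=7):
    """Max-pool one (x1, y1, x2, y2) region of a (C, H, W) map to (C, out_size, out_size)."""
    x1, y1, x2, y2 = roi
    region = fmap[:, y1:y2, x1:x2]
    C, h, w = region.shape
    # Split the region into an out_size x out_size grid of bins.
    ys = np.linspace(0, h, out_size + 1).astype(int)
    xs = np.linspace(0, w, out_size + 1).astype(int)
    out = np.zeros((C, out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            out[:, i, j] = region[:, ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max(axis=(1, 2))
    return out

fmap = np.random.rand(2, 32, 32)
pooled = roi_max_pool(fmap, (3, 5, 17, 19))   # a 14x14 candidate region
print(pooled.shape)  # (2, 7, 7)
```

Because every candidate region is pooled to the same shape, regions of any size can feed the same fully connected layer.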

Description

Technical Field

[0001] The invention relates to object detection technology in the field of machine vision, in particular to a multi-scale small object detection method based on feature fusion between deep-learning levels.

Background Technique

[0002] Object detection is a very important research topic in the field of machine vision and a foundational technology for higher-level tasks such as image segmentation, object tracking, and behavior analysis and recognition. In addition, with the development of mobile Internet technology, the number of images and videos is growing explosively, so there is an urgent need for a technology that can quickly and accurately identify and locate objects in images and videos, enabling subsequent intelligent classification of images and videos and extraction of key information. Object detection technology is now widely used in modern society, for example face detection, pedestrian (object) detection in the security field, traffic sign recognit...

Claims


Application Information

IPC(8): G06K9/62, G06K9/00
CPC: G06V20/41, G06F18/24, G06F18/253
Inventor: 张永强, 丁明理, 李贤, 杨光磊, 董娜
Owner: HARBIN INST OF TECH