
Safety helmet wearing inspection method based on YOLOv3 algorithm

An inspection method and helmet-wearing detection technology, applied to neural learning methods, computing, and computer components. It addresses the problems of insufficient detection accuracy, inaccurate detection results, and difficult image feature extraction, thereby improving detection efficiency and accuracy as well as the efficiency of iterative computation.

Active Publication Date: 2021-07-20
CHENGDU AIRCRAFT INDUSTRY GROUP

AI Technical Summary

Problems solved by technology

Because the helmet worn by a worker is small relative to the human body and the surrounding scene, traditional helmet-wearing detection based on the YOLO algorithm suffers from difficult image feature extraction and high sensitivity of the extracted features.
At the same time, because workshops and construction sites cover large areas, helmets and human bodies occupy only a small part of the collected images. This makes feature extraction even more difficult and insufficiently accurate when the traditional YOLO algorithm is used to extract helmet or human-body features, so the final detection results are inaccurate.
[0004] To address the defects of difficult feature extraction and insufficient detection accuracy when traditional YOLO-based image detection is used to judge whether a worker is wearing a helmet, the present invention discloses a safety helmet wearing inspection method based on the YOLOv3 algorithm.

Method used



Examples


Embodiment 1

[0051] A safety helmet wearing inspection method based on the YOLOv3 algorithm according to this embodiment is shown in Figure 1; the specific process steps are as follows:

[0052] Before the sample pictures are labeled, a sample image set must be constructed. The sample image set consists of pictures of workshop staff wearing hard hats, pictures of staff not wearing hard hats, and scene pictures without human bodies. The collection steps are as follows:

[0053] Step a. Use a camera to record video of staff wearing safety helmets in the workshop, and split the video into frames to obtain pictures; specifically, one picture is extracted every 10 frames (a minimal sketch of this frame-splitting step is given after these steps), and the extracted pictures include staff correctly wearing hard hats as well as scenes without human bodies;

[0054] Step b. Use the camera to record video of staff in the workshop who are not wearing helmets, and split the video into frames to obtain pictures; that i...
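The frame-splitting described in Step a and Step b can be illustrated with a short script. This is a minimal sketch only, assuming OpenCV (cv2) is available; the file names, output directory, and JPEG naming scheme are hypothetical, while the one-picture-every-10-frames interval follows the embodiment.

```python
# Illustrative sketch only, not the patented code: extract one picture every
# 10 frames from a workshop video, as described in Step a and Step b.
# Assumes OpenCV (cv2) is installed; the paths and naming scheme are hypothetical.
import os
import cv2

def extract_frames(video_path: str, out_dir: str, interval: int = 10) -> int:
    """Save every `interval`-th frame of the video as a JPEG and return the count saved."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    frame_idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read error
            break
        if frame_idx % interval == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:06d}.jpg"), frame)
            saved += 1
        frame_idx += 1
    cap.release()
    return saved

# Example usage (hypothetical file names):
# extract_frames("workshop_helmet_on.mp4", "samples/helmet_on")    # Step a
# extract_frames("workshop_helmet_off.mp4", "samples/helmet_off")  # Step b
```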

Embodiment 2

[0065] This embodiment is a further optimization of Embodiment 1. In the traditional YOLOv3 algorithm, the mean square error is generally used as the target localization loss function for detection-box regression, but a loss function based on the mean square error is very sensitive to scale information: its partial derivative becomes very small when the output probability approaches 0 or 1, so the gradients at the start of YOLOv3 training can nearly vanish, causing training to become slow or even stagnate.

[0066] Therefore, the present invention improves the loss function: the logarithmic loss of the IoU value between the scale prediction box and the ground-truth box is used to construct a loss function that measures the similarity between the two boxes, effectively avoiding the loss function's sensitivity to scale informa...
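As an illustration of the kind of loss described here, the sketch below computes -ln(IoU) between a predicted box and a ground-truth box. It is not the patented implementation: the (x1, y1, x2, y2) box format, the use of PyTorch, and the small epsilon that guards against log(0) are assumptions.

```python
# A minimal sketch of an IoU-based logarithmic localization loss of the kind
# described above: loss = -ln(IoU(predicted box, ground-truth box)).
import torch

def box_iou(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """IoU of axis-aligned boxes given as (..., 4) tensors in (x1, y1, x2, y2) order."""
    ix1 = torch.max(pred[..., 0], target[..., 0])
    iy1 = torch.max(pred[..., 1], target[..., 1])
    ix2 = torch.min(pred[..., 2], target[..., 2])
    iy2 = torch.min(pred[..., 3], target[..., 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_pred = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_true = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    return inter / (area_pred + area_true - inter + eps)

def iou_log_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Logarithmic IoU loss: close to 0 when boxes overlap well, large when they do not."""
    return -torch.log(box_iou(pred, target) + eps)
```

Because the IoU is a ratio of areas, a loss built on it is unaffected by the absolute size of the boxes, which is why such a construction sidesteps the scale sensitivity of a mean-square-error regression loss.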

Embodiment 3

[0077] This embodiment is a further optimization of Embodiment 1 or 2. The feature network in the YOLOv3 model uses the multi-scale target detection network Darknet-53 for feature extraction. The Darknet-53 structure contains alternating 1×1 and 3×3 convolutional layers and uses a residual structure together with a fully convolutional network to counter gradient vanishing in deep networks, which reduces the difficulty of training. In the original Darknet-53 network, a Softmax classifier produces the final output. In the improved multi-scale target detection network Darknet-53, the present invention expands the original algorithm's 3-scale detection to 4-scale detection, performs 6 successive double downsamplings with convolution, and obtains feature maps from the 3rd, 4th, 5th, and 6th double downsamplings with sizes of 104×104 pix...
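The relationship between the number of double downsamplings and the resulting feature-map sizes can be illustrated with a small calculation. The 832×832 input resolution below is an assumption chosen so that the 3rd downsampling produces the 104×104 feature map mentioned in this embodiment; the remaining sizes simply follow from halving the resolution at each step.

```python
# Illustrative arithmetic only: each "double downsampling" halves the spatial
# resolution, and detection heads are attached after the 3rd through 6th
# downsamplings. The 832x832 input size is an assumption, not stated in the excerpt.
INPUT_SIZE = 832  # assumed input resolution

for k in range(3, 7):  # 3rd, 4th, 5th and 6th double downsampling
    stride = 2 ** k
    grid = INPUT_SIZE // stride
    print(f"downsampling #{k}: stride {stride:>2}, feature map {grid}x{grid}")

# Output under this assumption:
#   downsampling #3: stride  8, feature map 104x104
#   downsampling #4: stride 16, feature map 52x52
#   downsampling #5: stride 32, feature map 26x26
#   downsampling #6: stride 64, feature map 13x13
```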



Abstract

The invention discloses a safety helmet wearing inspection method based on the YOLOv3 algorithm. The method expands the detection scales of the YOLOv3 model to obtain larger feature maps, solving the problems of insufficient detection precision and difficult feature extraction when helmet features are small. Meanwhile, the loss function in YOLOv3 is improved: it is established on the logarithmic loss of the IoU value between the scale detection box and the ground-truth box, which avoids the sensitivity of the traditional YOLOv3 loss function to scale information and greatly improves the efficiency and stability of the model's iterative computation. As a result, helmet features are detected and extracted more efficiently and with higher precision.

Description

Technical field

[0001] The invention belongs to the technical field of image detection and recognition, and in particular relates to a safety helmet wearing inspection method based on the YOLOv3 algorithm.

Background technique

[0002] To protect the safety of staff, workers must wear safety helmets when entering the workshop, but in practice they often fail to do so through negligence. In workshops and on construction sites, safety inspectors are often assigned to check whether workers wear helmets, but this kind of inspection is limited by area and is inefficient.

[0003] In the prior art, image target detection technology is also used to detect whether staff are wearing safety helmets. At present, image detection based on the YOLO algorithm is widely used. However, image detection based on the traditional YOLO algorithm requires extracting image features. Since the...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/32, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/103, G06V10/25, G06N3/045, G06F18/23213, G06F18/214
Inventor: 刘倍铭王飞扬张国峰曾璐遥方亿宁斯岚
Owner: CHENGDU AIRCRAFT INDUSTRY GROUP