
Human body action recognition method fusing an attention mechanism and a spatio-temporal graph convolutional neural network in a security scene

A convolutional neural network and human action recognition technology, applied in the field of human action recognition. It addresses problems such as the low frequency of abnormal actions, difficulties in data collection and labeling, and limited expressive ability, achieving strong generalization ability, enhanced robustness, and strong expressive power.

Active Publication Date: 2019-08-13
FUZHOU UNIV

AI Technical Summary

Problems solved by technology

[0008] In view of the above problems, the present invention provides a human action recognition method that integrates an attention mechanism with a spatio-temporal graph convolutional neural network in the security scene. It addresses the low frequency of abnormal actions in security scenes and the resulting difficulty of data collection and labeling; the limitation that traditional skeleton modeling methods usually rely on handcrafted parts or traversal rules, which restricts their expressiveness and makes generalization difficult; and the low efficiency and low accuracy of traditional motion description methods such as 3DHOG, motion vectors, and dense trajectories.



Examples


Embodiment Construction

[0052] To make the features and advantages of this patent more apparent and understandable, specific embodiments are described in detail below in conjunction with the accompanying drawings:

[0053] As shown in Figure 1, the overall process of this embodiment includes the following steps:

[0054] Step S1: Randomly divide the acquired human action analysis data set from the security scene into a training set and a validation set;

[0055] In this embodiment, the step S1 specifically includes:

[0056] Step S11: Use a self-built data set or download public data sets from the security field; uniformly process the obtained video data, scaling each frame to 340×256 and adjusting the frame rate to 30 frames per second;
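The frame-rate adjustment in Step S11 can be sketched as index resampling: for each output timestamp at the target rate, pick the nearest source frame. The patent does not specify the tool used; `resample_indices` is a hypothetical helper showing only the index arithmetic.

```python
def resample_indices(n_frames: int, src_fps: float, dst_fps: float) -> list[int]:
    """Map a clip with n_frames at src_fps onto source-frame indices at dst_fps."""
    duration = n_frames / src_fps
    n_out = int(round(duration * dst_fps))
    # Nearest-earlier source frame for each output timestamp.
    return [min(n_frames - 1, int(i * src_fps / dst_fps)) for i in range(n_out)]

# e.g. a 60-frame clip recorded at 60 fps, resampled to the 30 fps of Step S11
idx = resample_indices(60, 60.0, 30.0)  # every second frame: 0, 2, 4, ...
```

In practice the same index list would be used to select frames while decoding, and each kept frame would then be scaled to 340×256.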

[0057] Step S12: The data set is randomly divided into a training set and a validation set according to a ratio of 100:1.
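The 100:1 random split of Step S12 can be sketched as follows; the seeded shuffle is an illustrative choice for reproducibility, not something the patent specifies.

```python
import random

def split_dataset(samples, ratio=100, seed=0):
    """Randomly split samples into a training and validation set at ratio:1 (Step S12)."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_val = max(1, len(shuffled) // (ratio + 1))  # 1 part validation out of ratio+1
    return shuffled[n_val:], shuffled[:n_val]

train, val = split_dataset(list(range(1010)))  # -> 1000 training, 10 validation
```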

[0058] Step S2: Perform data enhancement processing on the video data of the training set and the validation set;

[0059] In this embodimen...
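The enhancement details above are truncated; a common augmentation for video action data is horizontal flipping, sketched here on frames stored as nested lists of pixel rows. This is an illustrative assumption, not necessarily the augmentation the patent uses.

```python
def hflip_frame(frame):
    """Horizontally flip one frame (a list of pixel rows)."""
    return [row[::-1] for row in frame]

def hflip_clip(clip):
    """Flip every frame the same way so the motion stays temporally consistent."""
    return [hflip_frame(f) for f in clip]

clip = [[[1, 2, 3], [4, 5, 6]]]  # one 2x3 single-channel frame
flipped = hflip_clip(clip)
```

Applying such transforms to every training clip enlarges the effective data volume, which is the stated purpose of Step S2.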


Abstract

The invention provides a human body action recognition method fusing an attention mechanism and a spatio-temporal graph convolutional neural network in a security scene. The method comprises the steps of: first, randomly dividing an acquired human action analysis data set from the security scene into a training set and a validation set; second, performing data enhancement processing on the video data of the training set and the validation set; screening key frames from the enhanced data set using an attention mechanism; transcoding and labeling the screened key-frame videos with a human pose estimation model framework in preparation for training a human action detection and recognition model; and finally, constructing a spatio-temporal skeleton graph convolutional neural network model, training it on the training set, optimizing the network parameter weights by stochastic gradient descent, and evaluating the accuracy of the model on the validation set. The method not only enlarges the original action data volume but also enhances the robustness of the model, thereby improving the final action recognition accuracy.
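The stochastic gradient descent mentioned in the abstract is the plain update w ← w − η∇L, sketched below on a toy quadratic loss. The learning rate and loss function are illustrative, not taken from the patent.

```python
def sgd_step(weights, grads, lr=0.01):
    """One stochastic gradient descent update: w <- w - lr * grad."""
    return [w - lr * g for w, g in zip(weights, grads)]

# Toy loss L(w) = sum(w_i^2), whose gradient is 2*w; the minimum is at w = 0.
w = [1.0, -2.0]
for _ in range(100):
    w = sgd_step(w, [2 * wi for wi in w], lr=0.1)
# After repeated updates, w has converged close to the minimum.
```

In the patent's setting the weights would be the parameters of the spatio-temporal skeleton graph convolutional network and the gradients would come from backpropagation over mini-batches of the training set.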

Description

Technical field

[0001] The invention relates to the field of pattern recognition and computer vision, in particular to a human action recognition method that combines an attention mechanism and a spatio-temporal graph convolutional neural network in a security scene.

Background technique

[0002] Vision has always been the most important and intuitive way for humans to obtain information from the outside world. According to relevant statistics, 80% of the information humans obtain comes through vision. With the continuous improvement in the quality of image sensors such as cameras and the continuous decline in their prices, image sensors have been deployed and applied on a large scale, producing massive amounts of information every day. Relying solely on the eyes to obtain the required information can no longer meet people's requirements for new information and knowledge. In addition, with the increase in computer computing speed, the further enhancement of computing power, and the conti...

Claims


Application Information

IPC(8): G06K9/00, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/23, G06V20/41, G06N3/045
Inventor: 柯逍, 柯力
Owner: FUZHOU UNIV