
Action recognition method for a high-temporal-resolution 3D neural network based on dilated convolution

A neural network and action recognition technology, applied in the fields of artificial intelligence and computer vision, which addresses problems such as ignoring the interaction between channels

Active Publication Date: 2019-10-15
CHINA UNIV OF GEOSCIENCES (WUHAN)

AI Technical Summary

Problems solved by technology

However, most previous modules ignore the interaction between channels.



Examples


Detailed Description of the Embodiments

[0053] In order to provide a clearer understanding of the technical features, purposes and effects of the present invention, specific embodiments of the present invention are now described in detail with reference to the accompanying drawings.

[0054] Embodiments of the present invention provide an action recognition method based on a dilated-convolution high-temporal-resolution 3D neural network.

[0055] Please refer to Figure 1, which is a flowchart of the action recognition method based on a dilated-convolution high-temporal-resolution 3D neural network in an embodiment of the present invention. The method specifically includes the following steps:

[0056] S101: Obtain public data sets and divide them into a training set and a test set; the public data sets are UCF101 and HMDB51;
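As a minimal sketch of S101 (hypothetical; the patent does not specify a loader, and the helper below assumes the official split files that UCF101 and HMDB51 ship with, one `<video_path> [<class_id>]` entry per line):

```python
from pathlib import Path

def load_split(split_file: str, video_root: str):
    """Read a split file of '<relative/path> [<class_id>]' lines into
    (absolute_path, label) pairs; label is -1 when the file omits it."""
    samples = []
    for line in Path(split_file).read_text().splitlines():
        parts = line.split()
        if not parts:
            continue
        label = int(parts[1]) if len(parts) > 1 else -1
        samples.append((str(Path(video_root) / parts[0]), label))
    return samples

# Official split 1 of UCF101 (file names are the dataset's, paths illustrative).
train_set = load_split("ucfTrainTestlist/trainlist01.txt", "UCF-101")
test_set = load_split("ucfTrainTestlist/testlist01.txt", "UCF-101")
```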

[0057] S102: Improve the three-dimensional Inception-V1 neural network model to obtain an improved three-dimensional Inception-V1 neural network ...
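Although the patent text is truncated here, the core primitive behind the improvement, a 3D convolution dilated along the temporal axis, can be sketched directly in PyTorch (the kernel size, channel counts and dilation below are illustrative assumptions, not the patent's exact configuration):

```python
import torch
import torch.nn as nn

# A 3x3x3 convolution dilated along the temporal axis: it enlarges the
# temporal receptive field without striding or pooling, which is how
# dilated (atrous) convolution preserves temporal resolution in a 3D net.
dilated3d = nn.Conv3d(
    in_channels=64, out_channels=64,
    kernel_size=3, padding=(2, 1, 1), dilation=(2, 1, 1),
)

clip = torch.randn(1, 64, 16, 56, 56)  # (N, C, T, H, W) feature map
out = dilated3d(clip)
print(out.shape)                       # torch.Size([1, 64, 16, 56, 56])
```

Note that the output keeps the full 16-frame temporal extent: the dilation widens the temporal receptive field to five frames while leaving the sequence length untouched.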



Abstract

The invention provides an action recognition method for a high-temporal-resolution 3D neural network based on dilated convolution. The method comprises the following steps: first, improving a three-dimensional Inception-V1 neural network model to obtain an improved three-dimensional Inception-V1 neural network model; next, dividing the public data set into a training set and a test set, and training and testing the improved model to obtain a trained high-precision three-dimensional Inception-V1 neural network model; and finally, using the trained high-precision model to recognize actions in actual videos. The method has the beneficial effect that, in the technical scheme provided by the invention, a new non-local feature-gating algorithm is introduced to redefine the channel weights of the three-dimensional Inception-V1 neural network model while high temporal resolution is preserved, so that model accuracy is improved.
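The abstract does not disclose the exact form of the non-local feature gate; the following is only a minimal sketch of the general channel-reweighting pattern it alludes to (a squeeze-and-excitation-style gate applied to 3D features; all layer sizes are illustrative):

```python
import torch
import torch.nn as nn

class ChannelGate3D(nn.Module):
    """Generic channel-gating block for 3D features: pool each channel
    over space-time, predict a per-channel weight in (0, 1), and rescale
    the feature map. Only the general pattern, not the patented gate."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                   # x: (N, C, T, H, W)
        n, c = x.shape[:2]
        w = x.mean(dim=(2, 3, 4))           # squeeze space-time: (N, C)
        w = self.fc(w).view(n, c, 1, 1, 1)  # per-channel gate in (0, 1)
        return x * w                        # redefine channel weights

gate = ChannelGate3D(64)
print(gate(torch.randn(2, 64, 16, 28, 28)).shape)  # (2, 64, 16, 28, 28)
```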

Description

Technical Field

[0001] The present invention relates to the fields of artificial intelligence and computer vision, and in particular to an action recognition method based on a dilated-convolution high-temporal-resolution 3D neural network.

Background

[0002] In recent years, action recognition, as one of the computer vision tasks, has received increasing attention. With the success of deep learning methods in image classification and segmentation, action recognition methods have also developed from traditional hand-crafted feature extraction to deep learning methods, especially convolutional neural networks, and have achieved good results.

[0003] Video recognition methods based on deep learning can be roughly divided into two categories: 2D CNNs and 3D CNNs. The 2D CNN methods process spatial and temporal information separately and then fuse them to obtain the final classification result. At the same time, with the help of the 2D CNN method in the field of ...
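To make the 2D-versus-3D distinction in [0003] concrete (a generic illustration, not code from the patent):

```python
import torch
import torch.nn as nn

clip = torch.randn(1, 3, 16, 112, 112)  # (N, C, T, H, W) video clip

# 2D CNN view: fold time into the batch and convolve frames independently;
# temporal information must be fused later (e.g. by averaging or an RNN).
frames = clip.permute(0, 2, 1, 3, 4).reshape(16, 3, 112, 112)
feat2d = nn.Conv2d(3, 64, kernel_size=3, padding=1)(frames)

# 3D CNN view: one kernel spans space and time jointly.
feat3d = nn.Conv3d(3, 64, kernel_size=3, padding=1)(clip)

print(feat2d.shape, feat3d.shape)
# torch.Size([16, 64, 112, 112]) torch.Size([1, 64, 16, 112, 112])
```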


Application Information

IPC(8): G06K9/00; G06N3/04
CPC: G06V40/20; G06V20/40; G06N3/045
Inventors: 徐永洋, 冯雅兴, 谢忠, 胡安娜, 曹豪豪
Owner: CHINA UNIV OF GEOSCIENCES (WUHAN)