
Video behavior identification method and device based on multi-time scale convolution

A multi-time-scale video behavior recognition technology, applied in character and pattern recognition, instruments, biological neural network models, etc., which addresses the problem of low recognition accuracy and has the effects of improving recognition accuracy and enhancing robustness.

Pending Publication Date: 2021-12-31
WUHAN UNIV OF TECH
Cites: 0 · Cited by: 0

AI Technical Summary

Problems solved by technology

[0005] In view of this, it is necessary to provide a video behavior recognition method and device based on multi-time scale convolution, to solve the technical problem in the prior art that behavior recognition by the TSM model has low recognition accuracy.
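For context, the temporal shift operation at the core of TSM-style models referenced above can be sketched as follows. This is a minimal illustration in PyTorch; the tensor layout and the shift fraction are assumptions for illustration, not details taken from the patent.

```python
import torch

def temporal_shift(x: torch.Tensor, shift_div: int = 8) -> torch.Tensor:
    """Shift a fraction of channels forward/backward along the time axis.

    x: feature tensor of shape (batch, time, channels, height, width).
    shift_div: 1/shift_div of the channels is shifted in each direction.
    """
    b, t, c, h, w = x.shape
    fold = c // shift_div
    out = torch.zeros_like(x)
    # shift the first fold of channels one frame toward the past
    out[:, :-1, :fold] = x[:, 1:, :fold]
    # shift the second fold one frame toward the future
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]
    # leave the remaining channels untouched
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]
    return out

# usage: 8 frames, 64 channels, 56x56 feature maps
feats = torch.randn(2, 8, 64, 56, 56)
shifted = temporal_shift(feats)
```

Because the shift mixes only a fixed fraction of channels across adjacent frames, the temporal receptive field is rigid, which is the limitation the patented multi-time-scale convolution is aimed at.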


Examples


Embodiment Construction

[0052] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative labor fall within the protection scope of the present invention.

[0053] In the description of the embodiments of the present application, "multiple" means two or more unless otherwise stated.

[0054]The "Examples" mentioned herein means that the specific features, structures, or characteristics described in connection with the embodiments may be included in at least one embodiment of the invention. This phrase is not necessarily a separate or alternative embodiment of the same embodiment in each position in the specification. Those skilled in the art is, and the embodiments described herein ...



Abstract

The invention provides a video behavior identification method and device based on multi-time scale convolution. The method comprises the following steps: constructing at least one multi-time scale convolution module; embedding the at least one multi-time scale convolution module into a preset skeleton network to form a target feature extraction model; extracting behavior features in a video through the target feature extraction model; and constructing a behavior recognition model, and recognizing the behavior features through the behavior recognition model. Because the behavior features in the video are extracted through the target feature extraction model, and the convolution kernel parameters in the multi-time scale convolution module can be learned and adjusted, the spatio-temporal features of different frames of the video can be effectively extracted, which improves the accuracy of behavior feature extraction. Moreover, the problem of information loss is avoided, further improving the accuracy of behavior feature extraction, so that the recognition accuracy of video behavior recognition can be improved.
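The abstract does not disclose the internal layout of the multi-time scale convolution module. A rough sketch of the general idea — several learnable temporal convolutions with different kernel extents applied to per-frame features and fused, so the block can be embedded between stages of a skeleton (backbone) network — could look like the following. The kernel sizes, channel counts, and fusion by summation are assumptions for illustration only, not the patented design.

```python
import torch
import torch.nn as nn

class MultiTimeScaleConv(nn.Module):
    """Illustrative block: parallel depthwise temporal convolutions with
    different kernel sizes over frame features, fused by summation,
    with a residual connection back to the input."""

    def __init__(self, channels: int, scales=(1, 3, 5)):
        super().__init__()
        # one learnable 1-D temporal convolution per time scale
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=k, padding=k // 2,
                      groups=channels, bias=False)
            for k in scales
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width)
        b, t, c, h, w = x.shape
        # fold spatial positions into the batch so each temporal conv
        # acts independently at every spatial location
        y = x.permute(0, 3, 4, 2, 1).reshape(b * h * w, c, t)
        fused = sum(branch(y) for branch in self.branches)
        return fused.reshape(b, h, w, c, t).permute(0, 4, 3, 1, 2) + x

# usage: insert between backbone stages that output (B, T, C, H, W) features
feats = torch.randn(2, 8, 64, 28, 28)
out = MultiTimeScaleConv(64)(feats)
print(out.shape)  # torch.Size([2, 8, 64, 28, 28])
```

The point of the parallel branches is that each kernel size covers a different temporal extent, and because the kernel weights are learned, the effective time scales adapt to the data rather than being fixed as in a plain temporal shift.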

Description

Technical field

[0001] The present invention relates to the field of video behavior recognition, and more particularly to a video behavior recognition method and apparatus based on multi-time scale convolution.

Background technique

[0002] Behavior recognition in the field of computer vision mainly uses image analysis, computer vision, and related technologies to understand and describe the behavior contained in video information. Depending on the skeleton network used, behavior recognition networks are generally divided into 2D behavior recognition networks and 3D behavior recognition networks. A network based on a 2D convolutional neural network typically fuses the high-dimensional features extracted by the skeleton network, but its convolution process lacks joint extraction over time and space; a network based on a 3D convolutional neural network can extract spatio-temporal features during convolution ...
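The contrast drawn in the background can be made concrete at the level of tensor shapes: a 2D convolution treats each frame independently, while a 3D convolution slides over time as well. Below is a shape-level sketch in PyTorch; the channel and frame counts are arbitrary example values, not taken from the patent.

```python
import torch
import torch.nn as nn

frames = torch.randn(1, 3, 8, 112, 112)      # (batch, channels, time, H, W)

# 2D path: fold time into the batch, so the kernel never sees neighbouring frames
conv2d = nn.Conv2d(3, 64, kernel_size=3, padding=1)
per_frame = conv2d(frames.permute(0, 2, 1, 3, 4).reshape(8, 3, 112, 112))
print(per_frame.shape)                        # torch.Size([8, 64, 112, 112])

# 3D path: the kernel spans 3 frames, extracting spatio-temporal features jointly
conv3d = nn.Conv3d(3, 64, kernel_size=(3, 3, 3), padding=1)
spatio_temporal = conv3d(frames)
print(spatio_temporal.shape)                  # torch.Size([1, 64, 8, 112, 112])
```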

Claims


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06K9/00G06K9/62G06N3/04
CPCG06N3/045G06F18/214
Inventor 陈西江杜晓妍梁全恩吴浩韩贤权
Owner WUHAN UNIV OF TECH