
Video action recognition method and system based on hybrid convolution multi-level feature fusion model

A feature fusion and action recognition technology in the field of computer vision, addressing the problem that multi-branch designs significantly increase model complexity, and achieving the effect of reduced model complexity

Active Publication Date: 2022-05-20
CHONGQING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

Although pre-defining visual-rhythm variations at the input level can significantly improve the model's recognition performance, model complexity grows substantially because parameter training involves multiple network branches.



Examples


Embodiment Construction

[0049] Embodiments of the present invention are described below through specific examples, and those skilled in the art can readily understand other advantages and effects of the present invention from the content disclosed in this specification. The present invention can also be implemented or applied through other specific embodiments, and various modifications or changes can be made to the details in this specification for different viewpoints and applications without departing from the spirit of the present invention. It should be noted that the diagrams provided in the following embodiments only schematically illustrate the basic concept of the present invention, and the following embodiments and their features can be combined with one another where no conflict arises.

[0050] The accompanying drawings are for illustrative purposes only; they are schematic diagrams rather than physical drawings, and should...



Abstract

The invention relates to a video action recognition method and system based on a hybrid-convolution multi-level feature fusion model, belonging to the field of computer vision. Two-dimensional convolution and separable three-dimensional convolution are used to construct a hybrid convolution module. A channel shift operation is performed on each input feature along the time dimension to construct a temporal shift module, which promotes information flow between adjacent frames and compensates for the weakness of two-dimensional convolution in capturing dynamic features. Multi-level complementary features derived from different convolutional layers of the backbone network are modulated spatially and temporally, so that the features at each level carry consistent semantic information in the spatial dimension and varied visual-rhythm cues in the temporal dimension. Bottom-up and top-down feature flows are constructed so that the features at each level complement one another, and the flows are processed in parallel to achieve multi-level feature fusion. Model training uses a two-stage training strategy.
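The patent does not publish code, but the channel-shift idea the abstract describes (shifting part of the channels forward and backward along the time axis so per-frame 2D convolutions can mix neighbouring-frame information) can be sketched in NumPy. The function name, tensor layout, and shift fraction below are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def temporal_shift(x, shift_div=8):
    """Shift a fraction of channels along the time axis.

    x: array of shape (T, C, H, W) -- one video clip of T frames.
    The first C // shift_div channels move forward in time, the next
    C // shift_div move backward, and the rest stay in place, so 2D
    convolutions applied per frame afterwards see adjacent-frame features.
    """
    t, c, h, w = x.shape
    fold = c // shift_div
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]                   # frame t receives frame t+1
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]   # frame t receives frame t-1
    out[:, 2 * fold:] = x[:, 2 * fold:]              # remaining channels untouched
    return out

# Toy clip: 2 frames, 8 channels, 1x1 spatial size, distinct values per channel.
clip = np.arange(2 * 8 * 1 * 1, dtype=float).reshape(2, 8, 1, 1)
shifted = temporal_shift(clip, shift_div=4)
```

Because the shift is a pure memory movement, it adds no learnable parameters, which is consistent with the abstract's goal of keeping model complexity low.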
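The complexity saving from replacing a full 3D convolution with a separable (spatial 2D plus temporal 1D) factorization, as in the hybrid convolution module, can be shown with a simple parameter count. The kernel size and channel counts below are assumptions chosen only to make the arithmetic concrete.

```python
def params_full_3d(c_in, c_out, k):
    # One k x k x k kernel per (input, output) channel pair; bias ignored.
    return c_in * c_out * k ** 3

def params_separable_3d(c_in, c_out, k, c_mid=None):
    # A 1 x k x k spatial conv into c_mid channels,
    # followed by a k x 1 x 1 temporal conv; bias ignored.
    c_mid = c_mid if c_mid is not None else c_out
    return c_in * c_mid * k ** 2 + c_mid * c_out * k

# Example: 64 -> 64 channels, 3x3x3 kernels.
full = params_full_3d(64, 64, 3)       # 64 * 64 * 27 = 110592
sep = params_separable_3d(64, 64, 3)   # 64 * 64 * 9 + 64 * 64 * 3 = 49152
```

For a 3x3x3 kernel the separable form needs (9 + 3) / 27, i.e. under half, of the parameters of the full 3D convolution, which illustrates why mixing 2D and separable 3D convolutions reduces model complexity.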

Description

Technical Field

[0001] The invention belongs to the technical field of computer vision, and relates to a video action recognition method and system based on a hybrid convolution multi-level feature fusion model.

Background Technique

[0002] The rapid development of artificial intelligence research has pushed human-computer interaction technology into people's daily lives, and the study of human action recognition derived from it has received extensive attention. In video-based action recognition tasks, traditional methods rely mainly on hand-crafted feature design, which suffers from severe domain limitations. To overcome these defects and obtain a more general feature representation, Convolutional Neural Networks (CNNs), inspired by biological visual perception mechanisms, have been widely applied to action recognition.

[0003] The performance of a model on human action recognition is closely related to its ability to represe...
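The bottom-up and top-down feature flows mentioned in the abstract can be sketched as two passes over feature maps from different backbone depths, one propagating fine detail downward and one propagating semantics upward, with the results averaged. The pooling/upsampling choices, equal channel counts, and averaging step below are assumptions for illustration; the patented fusion is more elaborate.

```python
import numpy as np

def fuse_levels(features):
    """Fuse multi-level feature maps with a bottom-up and a top-down pass.

    features: list of (C, H, W) arrays ordered from shallow (large H, W)
    to deep (small H, W), all with the same channel count; spatial sizes
    are assumed to halve at each level.
    """
    def downsample(x):                     # 2x average pooling
        c, h, w = x.shape
        return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

    def upsample(x):                       # 2x nearest-neighbour upsampling
        return x.repeat(2, axis=1).repeat(2, axis=2)

    bottom_up = [features[0]]
    for f in features[1:]:                 # shallow -> deep: pass detail down
        bottom_up.append(f + downsample(bottom_up[-1]))

    top_down = [features[-1]]
    for f in reversed(features[:-1]):      # deep -> shallow: pass semantics up
        top_down.insert(0, f + upsample(top_down[0]))

    # Each level now mixes information from both directions.
    return [(b + t) / 2 for b, t in zip(bottom_up, top_down)]

levels = [np.ones((4, 8, 8)), np.ones((4, 4, 4)), np.ones((4, 2, 2))]
fused = fuse_levels(levels)
```

The two passes are independent of each other, which is what makes parallel processing of the feature flows possible, as the abstract notes.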

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06V20/40; G06V40/20; G06V10/40; G06V10/62; G06V10/764; G06V10/80; G06V10/82; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/049; G06N3/08; G06V40/20; G06V20/42; G06V20/46; G06V10/40; G06N3/047; G06N3/045; G06F18/2415; G06F18/253
Inventors: 张祖凡, 彭月, 甘臣权, 张家波
Owner CHONGQING UNIV OF POSTS & TELECOMM