
A compressed domain video action recognition method and system based on new spatio-temporal feature flow

An action recognition technology based on a new spatio-temporal feature stream, applied in the field of deep learning and pattern recognition. It addresses problems such as high computational overhead, inability to handle scale changes, and low recognition accuracy, and has the effects of avoiding complete decoding and reconstruction, facilitating real-time application, and improving processing efficiency.

Active Publication Date: 2020-12-01
SOUTH CENTRAL UNIVERSITY FOR NATIONALITIES

AI Technical Summary

Problems solved by technology

[0004] However, the above video action recognition methods suffer from technical problems that cannot be ignored. The first method works to some extent on small datasets and specific actions, but on large-scale datasets the dense-trajectory features lack flexibility and scalability, so real-time, effective classification cannot be achieved. In the second method, recognition accuracy is low if optical flow is not used, while computing optical flow incurs very high overhead. In the third method, the amount of computation required by a three-dimensional convolutional neural network far exceeds that of a two-dimensional convolutional neural network, heavily occupying computing resources. The fourth method can recognize specific actions, but its generality is too low and it cannot handle scale changes, so it cannot meet the basic requirement of correctly recognizing multiple actions with a guaranteed level of recognition accuracy.

Method used




Embodiment Construction

[0057] In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments of the present invention described below may be combined with one another as long as they do not conflict.

[0058] The present invention proposes a compressed-domain video action recognition method based on a new spatio-temporal feature stream. It combines computer vision with compressed-domain video processing, applies traditional compressed-domain preprocessing methods to deep learning, creates a new spatio-temporal feature stream in the compressed domain, and uses a convolutional neural net...
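The core of the new spatio-temporal feature stream, as described in the abstract, is fusing the two channels of the preprocessed P-frame motion vectors with the single channel of the preprocessed residual into one three-channel R/G/B image for the CNN. A minimal NumPy sketch of that fusion step is shown below; the array shapes, the channel ordering (R = horizontal motion component, G = vertical motion component, B = residual), and the min–max normalization are illustrative assumptions, not details given in the patent text, and `build_spatiotemporal_feature` is a hypothetical name.

```python
import numpy as np

def build_spatiotemporal_feature(motion_vectors, residual):
    """Fuse preprocessed motion-vector data (H x W x 2: horizontal and
    vertical components) with preprocessed residual data (H x W) into a
    single H x W x 3 "spatio-temporal feature image" for CNN input.

    Channel assignment and [0, 1] scaling are illustrative assumptions.
    """
    h, w, c = motion_vectors.shape
    assert c == 2 and residual.shape == (h, w)
    # Stack: R <- MV x-component, G <- MV y-component, B <- residual.
    feature = np.dstack([motion_vectors[..., 0],
                         motion_vectors[..., 1],
                         residual]).astype(np.float32)
    # Scale each channel independently to [0, 1] (constant channels -> 0).
    mn = feature.min(axis=(0, 1), keepdims=True)
    mx = feature.max(axis=(0, 1), keepdims=True)
    return (feature - mn) / np.maximum(mx - mn, 1e-8)

# Toy inputs: 8x8 block of motion vectors and residuals.
mv = np.random.randint(-16, 17, size=(8, 8, 2)).astype(np.float32)
res = np.random.randint(0, 256, size=(8, 8)).astype(np.float32)
img = build_spatiotemporal_feature(mv, res)
print(img.shape)  # (8, 8, 3)
```

Because the fused image is built directly from compressed-domain syntax elements (motion vectors and residuals), no full decoding of the video to pixel frames is needed, which is what the patent credits for the improved processing efficiency.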



Abstract

The invention discloses a compressed-domain video action recognition method using a new spatio-temporal feature stream, comprising: extracting the I-frame data of a compressed video sequence together with the motion-vector data and residual data of the P-frames, and preprocessing the motion-vector and residual data. The two channels of the preprocessed motion vectors and the single channel of the preprocessed residual are fused as the R/G/B channels to construct a new spatio-temporal feature image, which is input into a CNN (convolutional neural network) model for training and testing to obtain action recognition category scores. The extracted I-frame data and the preprocessed motion-vector data are likewise each input into a CNN model for training and testing to obtain action recognition category scores. Finally, the three sets of category scores are fused in a 2:1:1 ratio to obtain the final action recognition result. The invention can solve the technical problems of low recognition accuracy and complicated computation that exist in current video action recognition methods.
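The final step of the abstract, fusing the three per-stream category scores in a 2:1:1 ratio, can be sketched as a simple weighted late fusion. The patent text does not state which stream receives the weight of 2; the sketch below assumes it is the new spatio-temporal feature stream, and the function name and inputs are illustrative.

```python
import numpy as np

def fuse_scores(feature_stream, i_frame, motion_vector, weights=(2, 1, 1)):
    """Weighted late fusion of three per-class score vectors in the
    2:1:1 ratio stated in the abstract; argmax gives the final class.

    Assumption: the first argument (the new spatio-temporal feature
    stream) receives the weight of 2.
    """
    w = np.asarray(weights, dtype=np.float32) / sum(weights)
    fused = (w[0] * np.asarray(feature_stream, dtype=np.float32)
             + w[1] * np.asarray(i_frame, dtype=np.float32)
             + w[2] * np.asarray(motion_vector, dtype=np.float32))
    return fused, int(np.argmax(fused))

# Toy per-class scores for a 3-class problem.
s_feat = [0.1, 0.7, 0.2]    # new spatio-temporal feature stream
s_iframe = [0.6, 0.2, 0.2]  # I-frame (RGB) stream
s_mv = [0.5, 0.3, 0.2]      # motion-vector stream
fused, label = fuse_scores(s_feat, s_iframe, s_mv)
print(fused, label)  # [0.325 0.475 0.2  ] 1
```

Note that with these weights the feature stream dominates: class 1 wins here even though both of the other streams individually favor class 0.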

Description

Technical Field

[0001] The invention belongs to the technical field of deep learning and pattern recognition, and more specifically relates to a compressed-domain video action recognition method and system based on a new spatio-temporal feature stream.

Background Technique

[0002] With the continuously increasing demand for artificial intelligence, video action recognition has become an important problem in computer vision and has broadly promoted the development of artificial intelligence.

[0003] Existing video action recognition methods mainly fall into the following four types. The first is video action recognition based on hand-crafted features, which densely extracts and tracks the features of each pixel in the optical flow and classifies them after encoding. The second is the two-stream neural network, which divides the video into spatial and temporal parts, feeds RGB images and optical-flow images into two neural networks, and fuses ...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (IPC-8): G06K9/00; G06K9/32; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/084; G06V20/46; G06V10/25; G06N3/045; G06F18/254
Inventor: 丁昊江凯华江小平石鸿凌李成华
Owner: SOUTH CENTRAL UNIVERSITY FOR NATIONALITIES