
Behavior recognition method based on deep residual network

A recognition method based on residual-network technology, applied in character and pattern recognition, biological neural network models, instruments, and similar fields; it addresses the still-unclear problem of how to model the temporal evolution of video with deep networks.

Inactive Publication Date: 2019-10-25
HANGZHOU DIANZI UNIV
Cites: 4 | Cited by: 8

AI Technical Summary

Problems solved by technology

Although convolutional neural networks have achieved great success in image recognition-based tasks, how to effectively model the temporal evolution of videos with deep networks remains unclear.


Examples


Embodiment Construction

[0031] The present invention is further described below in conjunction with the embodiments, so that those skilled in the art can better understand it. Note that detailed descriptions of known functions and designs are omitted where they would obscure the main content of the present invention.

[0032] A behavior recognition method based on a deep residual network comprises two phases: a training phase and a testing phase.

[0033] The training phase includes three modules: 1. a preprocessing module, whose main function is to obtain the original frames and optical flow of each training video; 2. a spatio-temporal dual-stream network construction module, whose main function is to build a spatial network and a temporal network based on ResNet; 3. a neural network training module, whose main function is to use the original frames and the optical flow to train the spatio-temporal dual-stream network. A sketch of the first two modules follows.
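The patent text here does not name the ResNet depth, the optical-flow algorithm, or the length of the flow stack, so the sketch below fills those in with common two-stream choices: ResNet-50 from torchvision, OpenCV's Farneback dense flow, and a stack of 10 flow fields (20 input channels for the temporal stream). All three are illustrative assumptions, not the patent's confirmed settings.

```python
# Minimal sketch of the preprocessing and network-construction modules,
# assuming PyTorch/torchvision and OpenCV. ResNet-50, Farneback flow, and a
# 10-frame flow stack are illustrative choices not specified by the patent.
import cv2
import torch.nn as nn
from torchvision.models import resnet50

def extract_frames_and_flow(video_path, num_frames=11):
    """Module 1 (preprocessing): grab raw frames and dense optical flow."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while len(frames) < num_frames:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    # One flow field per consecutive frame pair, each of shape (H, W, 2).
    flows = [cv2.calcOpticalFlowFarneback(grays[i], grays[i + 1], None,
                                          0.5, 3, 15, 3, 5, 1.2, 0)
             for i in range(len(grays) - 1)]
    return frames, flows

def build_two_stream(num_classes, flow_stack=10):
    """Module 2: spatial and temporal ResNets."""
    spatial = resnet50(num_classes=num_classes)   # 3-channel RGB input
    temporal = resnet50(num_classes=num_classes)
    # The temporal stream sees 2 * flow_stack channels (x/y displacement per
    # frame pair), so its first convolution is replaced accordingly.
    temporal.conv1 = nn.Conv2d(2 * flow_stack, 64, kernel_size=7,
                               stride=2, padding=3, bias=False)
    return spatial, temporal
```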



Abstract

The invention discloses a behavior recognition method based on a deep residual network. A spatial network and a temporal network are each constructed from the deep residual network, and the method comprises a training stage and a testing stage. In the training stage, the original frames and optical flow of each training video are extracted and sent to the spatial network and the temporal network, respectively, for training. In the testing stage, the original frames and optical flow of each test video are extracted and sent to the trained spatial and temporal network models, respectively, and each model outputs a score for every category to which a behavior may belong. The classification scores of the two models are fused, and the final behavior category is determined by a softmax classifier. According to the method, features that are effective for the current behavior can be enhanced according to the importance of each feature channel, while less informative features are suppressed, improving the model's ability to represent the input data. The method achieves relatively high behavior recognition accuracy, and performs particularly well on complex actions and actions that are otherwise difficult to recognize.
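The abstract's per-channel importance weighting reads like the well-known squeeze-and-excitation pattern; the patent body available here is truncated, so whether the invention uses exactly this form is an assumption. A minimal sketch of that pattern, in PyTorch:

```python
# Hedged sketch of the channel-importance mechanism the abstract describes,
# written as a standard squeeze-and-excitation block. The exact form used by
# the patent cannot be confirmed from the truncated text.
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                           # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                      # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)  # excitation: weight per channel
        return x * w                                # rescale: boost channels useful
                                                    # for the behavior, damp the rest
```

A block like this would typically be inserted after residual units in both streams, so each stream can reweight its own feature channels.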

Description

Technical Field

[0001] The invention belongs to the field of computer technology, in particular the field of behavior recognition, and relates to a method for recognizing human behavior in video; specifically, a behavior recognition method based on a deep residual network (Residual Neural Network, ResNet).

Background

[0002] Video action recognition refers to the use of algorithms that enable computers to automatically recognize actions in image sequences or videos. First, effective visual feature information is extracted from the image sequence or video; then an appropriate method is used to represent that information; finally, a classification model is constructed to learn behaviors and achieve correct recognition.

[0003] Since the appearance of an action is very similar across successive frames of a video, video action recognition models require temporal reasoning about appearance. In behavior recognition, in addition to behavior appearance, complex...
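The testing stage described in the abstract (each stream scores the clip, the scores are fused, and a softmax classifier picks the final category) can be sketched directly. Equal weighting of the two streams is an assumption; the patent may weight them differently.

```python
# Test-stage late fusion as described in the abstract. The 50/50 stream
# weighting is an illustrative assumption.
import torch
import torch.nn.functional as F

@torch.no_grad()
def fuse_and_classify(spatial_net, temporal_net, rgb, flow_stack, w=0.5):
    s = spatial_net(rgb)             # (N, num_classes) scores from RGB frames
    t = temporal_net(flow_stack)     # (N, num_classes) scores from optical flow
    fused = w * s + (1.0 - w) * t    # fuse the two streams' classification scores
    probs = F.softmax(fused, dim=1)  # softmax over behavior categories
    return probs.argmax(dim=1)       # final predicted category per clip
```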


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06K9/00G06K9/62G06N3/04
CPCG06V40/20G06V20/41G06N3/045G06F18/214
Inventor 陈华华查永亮叶学义
Owner HANGZHOU DIANZI UNIV