
Video human body behavior recognition method based on CNN and accumulated hidden layer state ConvLSTM

A recognition method applied in character and pattern recognition, neural learning methods, and related instruments. It addresses the lack of temporal-information modeling ability in 2D CNNs, reduced data utilization, and increased computing cost, with the effects of easing convergence difficulty, improving recognition accuracy, and saving time and cost.

Pending Publication Date: 2022-01-25
DALIAN NATIONALITIES UNIVERSITY

AI Technical Summary

Problems solved by technology

CNNs have powerful feature-extraction ability and are now widely applied across computer vision. However, a two-dimensional convolutional neural network (2D CNN) lacks the ability to model temporal information, while a three-dimensional convolutional neural network (3D CNN), the natural extension of the 2D CNN, can model temporal and spatial information simultaneously but incurs excessive computational cost.
ConvLSTM, a variant of Long Short-term Memory (LSTM), is very effective for sequential data, and video data is inherently sequential. Most previous work, however, used stacked ConvLSTM, which is difficult to converge and further increases computational cost. Moreover, ConvLSTM attends only to the hidden-layer state of the immediately preceding moment, not to hidden layers at longer temporal distances, which causes data redundancy and reduces data utilization.
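The page does not publish the module's internals, but one plausible reading of "accumulated hidden layer state" is that the cell conditions its gates on a running aggregate of all past hidden states rather than only on h at the previous step. A toy NumPy sketch of that accumulation idea (gate math simplified to elementwise weights instead of convolutions; all names and the mean-accumulation rule are assumptions, not the patent's actual design):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class AccumulatedStateCell:
    """Toy LSTM-style cell gating on an *accumulated* hidden state.

    Hypothetical simplification of the AH-ConvLSTM idea: instead of
    conditioning only on h_{t-1}, each gate sees the running mean of
    every past hidden state.  Convolutions are replaced by elementwise
    weights to keep the sketch short.
    """

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        # one (input-weight, accumulated-state-weight) pair per gate: i, f, o, g
        self.w = {k: (rng.normal(size=dim), rng.normal(size=dim))
                  for k in "ifog"}

    def run(self, xs):
        dim = xs[0].shape[0]
        h = np.zeros(dim)
        c = np.zeros(dim)
        h_sum = np.zeros(dim)          # accumulator over past hidden states
        history = []
        for t, x in enumerate(xs):
            # mean of h_1..h_{t-1}; zeros at the first step
            h_acc = h_sum / t if t else np.zeros(dim)
            wi, ui = self.w["i"]; wf, uf = self.w["f"]
            wo, uo = self.w["o"]; wg, ug = self.w["g"]
            i = sigmoid(wi * x + ui * h_acc)
            f = sigmoid(wf * x + uf * h_acc)
            o = sigmoid(wo * x + uo * h_acc)
            g = np.tanh(wg * x + ug * h_acc)
            c = f * c + i * g           # standard LSTM cell update
            h = o * np.tanh(c)
            h_sum += h
            history.append(h)
        return np.stack(history)
```

Because the accumulator is a single running sum, this keeps memory constant in sequence length while still exposing long-range hidden-state information to the gates, which is consistent with the stated goal of raising data utilization without stacking ConvLSTM layers.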




Embodiment Construction

[0027] See Figure 1, which is a flowchart of the method of the present invention.

[0028] To realize the overall network architecture of the present invention: first, use a two-dimensional convolutional neural network (2D CNN) pre-trained on ImageNet as the backbone model, and insert the accumulated hidden layer state ConvLSTM (AH-ConvLSTM) module to build the overall network. Secondly, obtain a data sample set comprising video data and labels and divide it into a training sample set and a test sample set; sample each video by segment sampling and feed the sampled frames into the overall network as input. Set a learning rate and train the overall network on the training sample set, training the AH-ConvLSTM module at 5 times the set learning rate to accelerate the convergence of the overall network. Use a classifier to generate recognition scores, update the network parameters through backpropagation, and save the parameters as weight files.
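The "5 times the learning rate" rule for the inserted module maps naturally onto per-parameter-group learning rates. A framework-free sketch of one SGD step (the `ah_convlstm` name tag and the parameter-dictionary layout are hypothetical; in PyTorch the same effect is usually obtained with optimizer parameter groups):

```python
BASE_LR = 0.001
LR_MULT = 5.0   # the inserted AH-ConvLSTM module trains at 5x the base rate

def sgd_step(params, grads, base_lr=BASE_LR, mult=LR_MULT):
    """One plain SGD update with a larger rate for AH-ConvLSTM parameters.

    `params` maps parameter names to values; names containing the
    hypothetical tag 'ah_convlstm' are treated as inserted-module
    weights and stepped with base_lr * mult, all others with base_lr.
    """
    updated = {}
    for name, value in params.items():
        lr = base_lr * mult if "ah_convlstm" in name else base_lr
        updated[name] = value - lr * grads[name]
    return updated
```

Giving freshly initialized module weights a larger step than the pre-trained backbone is a common way to speed up convergence of the new layers without disturbing the pre-trained features, matching the stated purpose here.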



Abstract

The invention discloses a video human body behavior recognition method based on a CNN and an accumulated hidden layer state ConvLSTM. The method comprises the steps of: employing a 2D CNN pre-trained on ImageNet as a backbone model, and inserting an AH-ConvLSTM module behind several fixed network-layer positions in the CNN to construct an overall network; acquiring a data sample set containing video data and labels, and dividing it into a training sample set and a test sample set; sampling each video by segment sampling and feeding the sampled frames into the overall network as input; setting a learning rate and training the overall network on the training sample set; using a classifier to generate recognition scores for all categories, updating the overall network parameters through backpropagation, and saving the parameters as weight files; and initializing the overall network with the weight file of highest validation accuracy, segment-sampling the video frames of the test sample set, and inputting the sampled frames into the overall network to learn the spatio-temporal information in the video and obtain the recognition result.
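The abstract's "segment sampling" is not spelled out on this page; a standard reading (assumed here, in the style of segment-based video sampling) is to split the video's frames into equal segments and take one frame per segment. A minimal sketch under that assumption:

```python
def segment_sample(num_frames, num_segments):
    """Segment-based frame sampling (assumed interpretation).

    Splits a video of `num_frames` frames into `num_segments` equal
    segments and returns one frame index per segment -- the segment
    centre here, for determinism; training code would typically draw
    a random offset inside each segment instead.
    """
    seg_len = num_frames / num_segments
    return [int(seg_len * k + seg_len / 2) for k in range(num_segments)]
```

Sampling one frame per segment covers the whole clip with a fixed, small number of inputs, which is what lets a 2D-CNN backbone see long-range content without 3D-CNN cost.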

Description

technical field

[0001] The present invention relates to the field of human behavior recognition in videos, and in particular to a video human behavior recognition method based on convolutional neural networks (Convolutional Neural Networks, abbreviated as CNN) and accumulated-hidden-layer-state convolutional long short-term memory (Convolutional Long Short-term Memory, abbreviated as ConvLSTM).

Background technique

[0002] Video action recognition is one of the representative tasks of video understanding. Thanks to the advent of deep learning, video action recognition has made great progress, but it has also encountered new challenges: modeling long-range temporal information in videos incurs high computational cost, and results vary due to differences in datasets and evaluation protocols. One of the most important tasks in video understanding is understanding human behavior, which has many practical application scenarios, including behavior analysis, video retrieval, human...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06V20/40, G06V10/82, G06N3/04, G06N3/08
CPC: G06N3/08, G06N3/044, G06N3/045
Inventor: 张建新, 王振伟, 张冰冰, 董微
Owner DALIAN NATIONALITIES UNIVERSITY