
Depth video behavior identification method and system

A depth video recognition technology, applied in character and pattern recognition, instruments, computing, etc. It addresses the problem that ignoring the differing learning abilities of feature channels and treating them equally reduces the strong expressive ability of CNN convolutional features, and achieves the effect of preserving good geometric information and privacy.

Active Publication Date: 2019-07-26
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

In addition, convolutional features are multi-channel, and different channels correspond to different feature detectors. Ignoring the differing learning abilities of the feature channels and treating them all equally may reduce the powerful expressiveness of CNN convolutional features.
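To make the cost of treating channels equally concrete, below is a minimal sketch of a learned per-channel weighting in PyTorch. The patent summary does not disclose its exact attention formulation, so the squeeze-and-excitation-style design, the module name, and the reduction ratio are illustrative assumptions rather than the patented model.

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """SE-style channel attention (assumed form): learn one weight per
        feature channel so informative detectors are amplified instead of
        all channels being treated equally."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, channels, height, width) convolutional feature map
            b, c = x.shape[:2]
            w = self.fc(x.mean(dim=(2, 3)))    # squeeze: global average pool over space
            return x * w.view(b, c, 1, 1)      # excite: reweight each channel

The sigmoid keeps each channel weight in (0, 1), so the module can only rescale a channel, never flip its sign; the reduction ratio controls the bottleneck of the weighting network.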



Examples


Embodiment 1

[0061] In one or more embodiments, a depth video behavior recognition method that fuses convolutional neural networks with a channel and spatiotemporal interest point attention model is disclosed. As shown in Figure 1, the dynamic image sequence representation of the depth video is used as the input to the CNNs, and the channel and spatiotemporal interest point attention models are embedded after the CNN convolutional layers to optimize and adjust the convolutional feature map. Finally, global average pooling is applied to the adjusted convolutional feature map of the input depth video to generate a feature representation of the behavior video, which is input into an LSTM network to capture the temporal information of the human behavior and classify it.
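The following is a rough sketch of that data flow under stated assumptions: the ResNet-18 backbone, the 512-channel feature size, the last-step LSTM readout, and all class and parameter names are placeholders, and the spatiotemporal interest point attention branch is omitted because its formulation is not given in this summary. Only the overall pipeline (dynamic image sequence → CNN → channel attention → global average pooling → LSTM → classification) follows the text.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class ChannelAttention(nn.Module):
        # Same SE-style sketch as above, repeated so this snippet runs standalone.
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())

        def forward(self, x):
            b, c = x.shape[:2]
            return x * self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)

    class DepthBehaviorRecognizer(nn.Module):
        """Hypothetical pipeline: dynamic images -> CNN conv features ->
        channel attention -> global average pooling -> LSTM -> classifier."""
        def __init__(self, num_classes: int, hidden: int = 512):
            super().__init__()
            backbone = models.resnet18(weights=None)
            self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # conv feature maps
            self.attn = ChannelAttention(512)   # adjusts the conv feature map
            self.lstm = nn.LSTM(512, hidden, batch_first=True)
            self.head = nn.Linear(hidden, num_classes)

        def forward(self, dis: torch.Tensor) -> torch.Tensor:
            # dis: (batch, segments, 3, H, W) dynamic-image sequence of one depth video
            b, s = dis.shape[:2]
            f = self.cnn(dis.flatten(0, 1))           # (b*s, 512, h, w)
            f = self.attn(f)                          # optimized/adjusted feature map
            f = f.mean(dim=(2, 3)).view(b, s, -1)     # global average pooling per segment
            out, _ = self.lstm(f)                     # capture temporal information
            return self.head(out[:, -1])              # classify from the last time step

For example, DepthBehaviorRecognizer(num_classes=27)(torch.randn(2, 8, 3, 224, 224)) returns per-class scores of shape (2, 27) for a batch of two videos with eight segments each.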

[0062] This embodiment proposes a dynamic image sequence (DIS) representation for the video, which divides the entire video into a group of short-term segments along the time axis and then encodes each ...
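Since the encoding details are truncated here, the sketch below assumes one common simplified form of approximate rank pooling used for dynamic images in the literature (linear temporal weights α_t = 2t − T − 1); the segment length and function names are illustrative, not taken from the patent.

    import torch

    def dynamic_image(segment: torch.Tensor) -> torch.Tensor:
        # segment: (T, H, W) depth frames of one short-term clip.
        # Weighted temporal sum with alpha_t = 2t - T - 1, so later frames
        # contribute positively and earlier frames negatively, preserving a
        # trace of the motion direction in a single image.
        T = segment.shape[0]
        t = torch.arange(1, T + 1, dtype=segment.dtype)
        alpha = 2.0 * t - T - 1.0
        return (alpha.view(T, 1, 1) * segment).sum(dim=0)

    def dynamic_image_sequence(video: torch.Tensor, seg_len: int = 10) -> torch.Tensor:
        # Split the depth video into short-term segments along the time axis
        # and encode each segment as one dynamic image.
        segments = video.split(seg_len, dim=0)
        return torch.stack([dynamic_image(s) for s in segments])

A 300-frame depth video with seg_len=10 thus becomes a sequence of 30 dynamic images, each summarizing the motion within its segment; each image is single-channel here and would be replicated to three channels (or the backbone's first convolution adapted) before being fed to an RGB-style CNN.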

Embodiment 2

[0160] In one or more embodiments, a depth video behavior recognition system that fuses convolutional neural networks with a channel and spatiotemporal interest point attention model is disclosed. The system includes a server comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the depth video behavior recognition method described in Embodiment 1 is implemented.

Embodiment 3

[0162] In one or more embodiments, a computer-readable storage medium is disclosed, on which a computer program is stored. When the program is executed by a processor, it performs the depth video behavior recognition method fusing convolutional neural networks with the channel and spatiotemporal interest point attention model described in Embodiment 1.



Abstract

The invention discloses a depth video behavior identification method and system. The method comprises the steps of: using the dynamic image sequence representation of a depth video as the input to CNNs; embedding a channel and spatiotemporal interest point attention model after the CNN convolutional layers to optimize and adjust the convolutional feature map; and finally applying global average pooling to the adjusted convolutional feature map of the input depth video to generate a feature representation of the behavior video, which is input into an LSTM network to capture the temporal information of human behavior and classify it. The method is evaluated on three challenging public human behavior datasets, and the experimental results show that it extracts discriminative spatiotemporal information and remarkably improves the performance of video-based human behavior identification. Compared with other existing methods, it effectively increases the behavior recognition rate.

Description

Technical Field

[0001] The invention belongs to the technical field of video-based human behavior recognition, and in particular relates to a depth video behavior recognition method and system that fuses a convolutional neural network with a channel and spatiotemporal interest point attention model.

Background

[0002] The statements in this section merely provide background information related to the present invention and do not necessarily constitute prior art.

[0003] Video-based human action recognition has attracted increasing attention in the field of computer vision in recent years due to its wide range of applications, such as intelligent video surveillance, video retrieval, and elderly monitoring. Although much research has been carried out on understanding and classifying human behavior in videos to improve the performance of action recognition, due to the interference caused by complex background environments, rich inter-behavior c...


Application Information

IPC(8): G06K 9/00, G06K 9/62
CPC: G06V 40/20, G06V 20/40, G06F 18/2193
Inventors: 马昕, 武寒波, 宋锐, 荣学文, 田国会, 李贻斌
Owner: SHANDONG UNIV