A method for human behavior recognition based on 3D deep convolutional network

A deep 3D convolutional neural network technology, applied in the field of computer vision video recognition, that solves problems such as the inability to process videos of arbitrary spatial scale and duration and the incompleteness of behavior information, with the effects of increasing the scale of video training data, improving robustness, and improving the completeness of behavior information.

Active Publication Date: 2021-05-18
CHENGDU KOALA URAN TECH CO LTD
Cites: 3 · Cited by: 1

AI Technical Summary

Problems solved by technology

[0003] To sum up, the problems in the existing technology are as follows. The existing 3-dimensional convolutional network can only extract sub-motion states, under the assumption that every small segment of the video belongs to the same behavior category; the spatial scale and duration of each input video segment must be fixed, so videos of arbitrary scale and duration cannot be processed; at the same time, the network learns only short-term motion features and therefore lacks complete behavioral information.
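The fixed-input constraint above is exactly what spatio-temporal pyramid pooling removes. A minimal sketch in PyTorch (with illustrative level sizes, not the patent's exact layer) of how adaptive pooling at several grid sizes yields a fixed-length feature from clips of any duration and resolution:

```python
# Sketch of spatio-temporal pyramid pooling: adaptive max-pooling at several
# grid sizes turns a feature map of any temporal length and spatial
# resolution into a fixed-length vector. Level sizes are an assumption.
import torch
import torch.nn.functional as F

def spatio_temporal_pyramid_pool(x, levels=(1, 2, 4)):
    """x: (batch, channels, T, H, W) with arbitrary T, H, W."""
    batch, channels = x.shape[:2]
    pooled = []
    for n in levels:
        # Pool to an n x n x n spatio-temporal grid regardless of input size.
        p = F.adaptive_max_pool3d(x, output_size=(n, n, n))
        pooled.append(p.reshape(batch, channels * n * n * n))
    return torch.cat(pooled, dim=1)  # fixed length: channels * sum(n^3)

# Two clips of different duration/resolution map to the same feature size.
a = spatio_temporal_pyramid_pool(torch.randn(1, 64, 16, 112, 112))
b = spatio_temporal_pyramid_pool(torch.randn(1, 64, 9, 80, 96))
assert a.shape == b.shape  # both (1, 64 * (1 + 8 + 64))
```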




Detailed Description of the Embodiments

[0036] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be further described in detail below in conjunction with the embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit it.

[0037] For action recognition in video, traditional methods cast the problem as a multi-class classification problem and propose different video feature extraction methods. However, traditional methods extract low-level information, such as visual texture information or motion estimates from the video. Since the extracted information is limited, it cannot represent the video content well, and the classifier optimized on it is not optimal. As a deep learning technology, the convolutional neural network integrates feature learning and classifier learning into a single whole, and has been successfully applied to behavior recognition in...
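As a hypothetical illustration of the point in [0037], here is a tiny 3D convolutional network in PyTorch that learns spatio-temporal features and the classifier jointly, end to end; layer sizes and the class count are assumptions, not the patent's configuration:

```python
# Minimal sketch: a 3D CNN whose feature extractor and classifier are
# trained as one model, instead of hand-crafting low-level texture/motion
# descriptors first. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class Tiny3DActionNet(nn.Module):
    def __init__(self, num_classes=101):
        super().__init__()
        self.features = nn.Sequential(                 # learned features
            nn.Conv3d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(128, num_classes)  # learned classifier

    def forward(self, clip):                           # clip: (B, 3, T, H, W)
        f = self.features(clip).flatten(1)
        return self.classifier(f)                      # class logits

logits = Tiny3DActionNet()(torch.randn(2, 3, 16, 112, 112))
print(logits.shape)  # torch.Size([2, 101])
```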



Abstract

The invention belongs to the field of computer vision video action recognition, and discloses a method for human action recognition based on a 3D deep convolutional network. The method first divides a video into a series of continuous video segments; it then inputs the continuous video segments into a 3D neural network composed of convolutional computing layers and a spatio-temporal pyramid pooling layer to obtain continuous video segment features; finally, it computes a global video feature as the behavior pattern through a long short-term memory model. The technology of the present invention has obvious advantages. By improving the standard 3-dimensional convolutional network C3D and introducing multi-level pooling, features can be extracted from video clips of any resolution and duration; at the same time, the robustness of the model to large behavior changes is improved, which is beneficial to increasing the scale of video training data while maintaining video quality; and embedding correlation information across the motion sub-states improves the completeness of the behavior information.
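A hedged end-to-end sketch of the pipeline the abstract describes, in PyTorch: each segment passes through a 3D convolutional trunk with spatio-temporal pyramid pooling to produce a fixed-length segment feature, and an LSTM aggregates the segment features into a global video feature for classification. The class names, layer sizes, and pyramid levels here are illustrative assumptions, not the patent's exact configuration.

```python
# Sketch only: segment -> 3D conv + spatio-temporal pyramid pooling -> LSTM
# -> behavior label. Dimensions are assumptions, not the patented network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegmentEncoder(nn.Module):
    """3D conv trunk plus pyramid pooling -> fixed-length segment feature."""
    def __init__(self, channels=64, levels=(1, 2)):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.levels = levels
        self.out_dim = channels * sum(n ** 3 for n in levels)

    def forward(self, seg):  # seg: (B, 3, T, H, W), any T/H/W
        x = self.conv(seg)
        parts = [F.adaptive_max_pool3d(x, (n, n, n)).flatten(1)
                 for n in self.levels]
        return torch.cat(parts, dim=1)

class VideoBehaviorNet(nn.Module):
    def __init__(self, num_classes=101, hidden=256):
        super().__init__()
        self.encoder = SegmentEncoder()
        self.lstm = nn.LSTM(self.encoder.out_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, segments):  # list of (B, 3, T_i, H, W) segments
        feats = torch.stack([self.encoder(s) for s in segments], dim=1)
        _, (h, _) = self.lstm(feats)   # last hidden state = global feature
        return self.head(h[-1])

# A video split into three consecutive segments of different durations.
video_segments = [torch.randn(1, 3, t, 112, 112) for t in (8, 12, 16)]
print(VideoBehaviorNet()(video_segments).shape)  # torch.Size([1, 101])
```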

Description

Technical Field

[0001] The invention belongs to the field of computer vision video recognition, and in particular to a method for human behavior recognition based on a 3D deep convolutional network.

Background Technique

[0002] In the field of computer vision, research on action recognition has gone on for more than 10 years. As an important part of pattern recognition, feature engineering has long been dominant in the field of behavior recognition. Before deep learning, the scientists Ivan Laptev and Cordelia Schmid of the French computer vision institution Inria made the most outstanding contributions to the learning of behavioral features. Similar to the ILSVRC image recognition challenge, the action-recognition challenge THUMOS refreshes its recognition records every year, and the behavioral feature calculation methods introduced by Inria have always been among the best. In particular, in 2013, Dr. Heng Wang of Inria proposed a trajectory-based behavior feature calculation...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00, G06K9/62, G06N3/04
CPC: G06V40/20, G06N3/045, G06F18/24, G06F18/214
Inventor: 高联丽 (Gao Lianli), 宋井宽 (Song Jingkuan), 王轩瀚 (Wang Xuanhan), 邵杰 (Shao Jie), 申洪宇 (Shen Hongyu)
Owner: CHENGDU KOALA URAN TECH CO LTD