
Real-time human body action recognizing method and device based on depth image sequence

A human action recognition technology based on depth images, applied in the field of pattern recognition, addressing the problems of normalization deviation, degraded recognition accuracy, and recognition efficiency in need of improvement.

Active Publication Date: 2013-08-14
TSINGHUA UNIV


Problems solved by technology

[0004] At present, research on action recognition from depth images is still limited, and existing techniques share a common shortcoming: because the extracted features depend on the absolute coordinates of the human body region, normalization must be performed before recognition, which requires accurately detecting the position and size of the target body in the image. In practical applications, however, the user's movement is highly variable; complex movements in particular may be accompanied by body translation, tilt, or height changes, which often cause normalization deviations that in turn degrade recognition accuracy. Moreover, the recognition efficiency of prior-art depth-image action recognition methods still needs improvement.

Method used



Embodiment Construction

[0060] Specific embodiments of the present invention are further described below in conjunction with the drawings. The following examples serve only to illustrate the present invention, not to limit its scope.

[0061] Figure 1 shows the flowchart of a real-time human action recognition method based on a depth image sequence, which mainly includes the following steps:

[0062] S1. From the target depth image sequence collected by hardware such as a depth camera, accurately segment the human body region using background modeling, image segmentation, and related techniques, and extract the target action silhouette R (shown, for example, in the first column of Figure 2); extract training action silhouettes from the training depth image set in the same way.
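Step S1 can be illustrated with a minimal sketch. The patent relies on background modeling and image segmentation; the version below instead assumes the body is simply the set of pixels inside a working depth band, with illustrative `near`/`far` thresholds in millimeters, which is not the patented segmentation method.

```python
import numpy as np

def extract_silhouette(depth_frame, near=500, far=3000):
    """Extract a binary human-body silhouette from a depth frame.

    Sketch only: real systems use background modeling and segmentation;
    here we keep pixels whose depth (mm) falls inside [near, far].
    """
    mask = (depth_frame > near) & (depth_frame < far)
    return mask.astype(np.uint8)

# Toy 4x4 depth frame (mm): the center 2x2 block plays the "body".
frame = np.full((4, 4), 4000)
frame[1:3, 1:3] = 1500
silhouette = extract_silhouette(frame)
```

The binary mask produced here stands in for the target action silhouette R from which posture features are later computed.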

[0063] S2. Perform posture clustering on the training action silhouettes, and perform action calibration on the clustering results; that is, classify each po...
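The posture clustering of step S2 can be sketched with a plain k-means loop, under the assumption that each training silhouette has already been reduced to a fixed-length feature vector; the patent's actual feature definition and cluster count are not specified here, so both are illustrative.

```python
import numpy as np

def kmeans_postures(features, k, iters=20, seed=0):
    """Cluster silhouette feature vectors into k posture classes (sketch of S2)."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct training frames.
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each frame to its nearest posture center.
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its assigned frames.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated toy posture groups in a 2-D feature space.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, centers = kmeans_postures(feats, k=2)
```

The resulting cluster labels are what action calibration would then map onto the action categories of the training set.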



Abstract

The invention relates to the technical field of pattern recognition, and in particular to a real-time human body action recognition method and device based on a depth image sequence. The method comprises the steps of: S1, extracting a target action silhouette from a target depth image sequence and extracting training action silhouettes from a training depth image set; S2, performing posture clustering on the training action silhouettes and performing action calibration on the clustering results; S3, computing the posture features of the target and training action silhouettes; S4, performing posture training based on a Gaussian mixture model using the posture features of the training action silhouettes, and constructing a posture model; S5, computing the transition probabilities among the clustered postures within each action and constructing an action graph model; and S6, performing action recognition on the target depth image sequence according to the posture features of the target action silhouette, the posture model, and the action graph model. The method improves the efficiency of action recognition as well as its accuracy and robustness.
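Steps S5 and S6 of the abstract can be sketched as follows: estimate posture-to-posture transition probabilities per action, then recognize by scoring a target posture sequence under each action's graph. This is a simplified stand-in, assuming the GMM posture model has already mapped each frame to a discrete posture label; the action names and sequences are toy data.

```python
import numpy as np

def transition_matrix(label_seq, n_postures, smooth=1e-3):
    """Estimate posture transition probabilities for one action (sketch of S5).

    Counts consecutive posture pairs and row-normalizes, with additive
    smoothing so unseen transitions keep a small nonzero probability.
    """
    T = np.full((n_postures, n_postures), smooth)
    for a, b in zip(label_seq[:-1], label_seq[1:]):
        T[a, b] += 1.0
    return T / T.sum(axis=1, keepdims=True)

def score_action(label_seq, T):
    """Log-likelihood of a posture sequence under one action graph (sketch of S6)."""
    return sum(np.log(T[a, b]) for a, b in zip(label_seq[:-1], label_seq[1:]))

# Two toy actions over 3 posture classes.
T_wave = transition_matrix([0, 1, 2, 1, 2, 1], 3)
T_jump = transition_matrix([0, 0, 1, 1, 0, 0], 3)

# Recognize: pick the action whose graph best explains the target sequence.
target = [0, 1, 2, 1]
best = max([("wave", T_wave), ("jump", T_jump)],
           key=lambda m: score_action(target, m[1]))
```

Because the transition-based score depends only on posture labels, not absolute coordinates, it illustrates why this style of model sidesteps the normalization problem described in [0004].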

Description

Technical field

[0001] The invention relates to the technical field of pattern recognition, in particular to a method and device for real-time human action recognition based on depth image sequences.

Background technique

[0002] With the development of modern information technology toward intelligence and humanization, human-computer interaction, virtual reality, and intelligent monitoring systems have emerged one after another. Technologies such as human body pose estimation, action recognition, and behavior understanding based on computer vision play an important role in them. In recent years, the release of Microsoft's Kinect depth camera has greatly reduced the cost of obtaining real-time 3D scene information, opening new possibilities for the field of motion recognition. However, due to the non-rigidity of the human body, the diversity of motion patterns, and the randomness of displacement, real-time and robust human motion r...


Application Information

IPC(8): G06K9/00
Inventor 王贵锦李艳丽何礼林行刚
Owner TSINGHUA UNIV