
A viewpoint-independent human action recognition method based on template matching

A template matching-based human action recognition technology, applied in the field of human action recognition

Inactive Publication Date: 2010-01-20
ZHEJIANG UNIV
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

Therefore, when only monocular surveillance video is available, it is difficult to perform action recognition by constructing motion history volumes (MHV).



Examples


Embodiment 1

[0062] An example of action recognition based on a synthetic data set:

[0063] For this dataset there are 18 different instances of each action; since one instance has already been selected into the action template, the remaining 17 can be used as test cases. The algorithm of the present invention is compared with two others: the temporal template matching method proposed by Bobick et al. and k-nearest neighbor (kNN) classification. In the kNN method we still extract the multi-viewpoint polar coordinate features of the sample action when constructing the template data, and use the nonlinear dimensionality reduction method to map them to the four-dimensional subspace, but at classification time the hypersphere-based rule is replaced by kNN classification. Since Bobick's method is viewpoint-dependent, 24 temporal templates are constructed for it from the same sample actions (these sample actions are also used to ...
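To make the comparison concrete, here is a minimal sketch of the two classifiers operating in the four-dimensional subspace. Everything in it is an illustrative assumption rather than the patent's exact procedure: the hypersphere rule follows the abstract (choose the template whose sphere surface is nearest to the query coordinate), the kNN baseline is a plain majority vote, and Euclidean distance is assumed throughout.

```python
import numpy as np

def hypersphere_classify(x, centers, radii):
    """Pick the action template whose hypersphere surface is nearest to x.

    x       : (d,) low-dimensional coordinate of the action to recognize
    centers : (k, d) hypersphere centers, one per template action
    radii   : (k,) hypersphere radii
    """
    to_center = np.linalg.norm(centers - x, axis=1)  # distance to each center
    to_surface = np.abs(to_center - radii)           # distance to each surface
    return int(np.argmin(to_surface))

def knn_classify(x, train_x, train_y, k=5):
    """Baseline: majority vote among the k nearest training coordinates."""
    order = np.argsort(np.linalg.norm(train_x - x, axis=1))[:k]
    labels, counts = np.unique(train_y[order], return_counts=True)
    return int(labels[np.argmax(counts)])
```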

Embodiment 2

[0065] Example of action recognition based on public data sets:

[0066] In Embodiment 1, Bobick's method performs similarly to the method described in the present invention, because the viewpoint of the input action was strictly restricted in that test to match the viewpoint of the action template. The second group of tests therefore compares the generalizability of Bobick's method with that of the present invention. We run both algorithms on the IXMAS dataset, which is publicly available for download from the INRIA PERCEPTION site. This dataset contains 13 daily human actions, each performed 3 times by 11 actors; the actors freely change their orientation during performance to reflect viewpoint independence, so each action offers 33 test cases to choose from. We selected four actions (walking, punching, kicking and squatting) under 5 free viewpoints in the IXMAS dataset, and calculated their motion history images and polar coordinate features as input to the ...
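The patent does not detail the polar coordinate feature at this point, so the following is only a plausible sketch of one common construction: the motion history image is divided into angular sectors and radial rings around the silhouette centroid, and the accumulated intensity of each bin forms the feature vector. The bin counts and the normalization are assumptions.

```python
import numpy as np

def polar_features(mhi, n_angles=12, n_rings=5):
    """Bin a motion history image into (angle, radius) cells around its centroid."""
    ys, xs = np.nonzero(mhi)
    if len(xs) == 0:
        return np.zeros(n_angles * n_rings)
    cy, cx = ys.mean(), xs.mean()                       # silhouette centroid
    dy, dx = ys - cy, xs - cx
    theta = np.arctan2(dy, dx)                          # angle in [-pi, pi]
    r = np.hypot(dy, dx)
    a_bin = np.clip(((theta + np.pi) / (2 * np.pi) * n_angles).astype(int),
                    0, n_angles - 1)
    r_bin = np.minimum((r / (r.max() + 1e-9) * n_rings).astype(int), n_rings - 1)
    feat = np.zeros((n_angles, n_rings))
    np.add.at(feat, (a_bin, r_bin), mhi[ys, xs])        # accumulate MHI intensity
    return (feat / (feat.sum() + 1e-9)).ravel()         # normalized feature vector
```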

Embodiment 3

[0068] Example of action recognition based on real video:

[0069] We captured real videos of laboratory members in the campus parking lot; the moving human silhouette is obtained by a background modeling method. Here, how to obtain meaningful "action" segments from a time-series collection of human silhouettes as algorithm input is a critical issue. We use a segmentation algorithm based on subspace analysis to segment dynamic human movements in the time domain: every second, 30 frames of human body contours are pre-segmented (the video frame rate is 30 fps) to initially extract candidate action segments, and the motion history image and corresponding polar coordinate features are calculated as the input of the algorithm. A total of 10 tests were repeated; the average distance between the action to be recognized and each sample action in the action template is shown in Figure 6, from which it can be seen that the method of the present invention is also effective for ...
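A hedged sketch of the pre-segmentation and motion history computation described above. Assumptions: silhouettes are binary masks from background subtraction, windows are 30 frames (one second at 30 fps), and the motion history image follows the standard Bobick-Davis update (set to the window length where motion occurs, decremented elsewhere); the patent's subspace-analysis segmentation itself is not reproduced here.

```python
import numpy as np

def motion_history_image(silhouettes, tau=30):
    """Standard MHI update over a window of binary silhouette masks.

    silhouettes : sequence of (H, W) masks produced by background modeling
    tau         : temporal window length (30 frames ~ 1 s at 30 fps)
    """
    sils = [np.asarray(s, dtype=bool) for s in silhouettes]
    mhi = np.zeros(sils[0].shape, dtype=float)
    for sil in sils:
        mhi = np.where(sil, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi / tau                                    # normalize to [0, 1]

def pre_segment(frames, size=30, step=30):
    """Cut a silhouette sequence into fixed-length candidate action segments."""
    for start in range(0, len(frames) - size + 1, step):
        yield frames[start:start + size]
```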



Abstract

The invention discloses a viewpoint-independent human action recognition method based on template matching, which can identify a number of pre-defined typical actions in a video. When constructing a template, a motion history image is calculated under a plurality of projection viewpoints for each sample action and polar coordinate features are extracted; the polar coordinate features are mapped to a low-dimensional subspace by a manifold learning method, and a hypersphere is constructed for each sample action in the subspace on the basis of the low-dimensional coordinates of its multi-viewpoint polar coordinate features. The action template is composed of a number of hyperspheres with known centers and radii. Given an unidentified action, its motion history image and corresponding polar coordinate features are first calculated; the polar coordinate features are then projected into the template action subspace to obtain a low-dimensional coordinate, the distances from this coordinate to the surfaces of all hyperspheres are calculated, and the nearest hypersphere is selected as the recognition result. The technology provided by the invention realizes viewpoint-independent action recognition and has high application value in the field of video surveillance.
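To make the template construction step concrete, here is a hedged sketch. Two choices in it are assumptions, not the patent's prescription: scikit-learn's Isomap stands in for the unspecified manifold learning method, and each hypersphere is fit as the centroid of an action's multi-viewpoint coordinates plus the maximum distance to it.

```python
import numpy as np
from sklearn.manifold import Isomap

def build_templates(features_by_action, n_components=4, n_neighbors=5):
    """Fit the shared low-dimensional subspace and one hypersphere per action.

    features_by_action : list of (n_views, n_features) arrays, the
                         multi-viewpoint polar features of each sample action
    """
    stacked = np.vstack(features_by_action)
    embed = Isomap(n_neighbors=n_neighbors, n_components=n_components)
    coords = embed.fit_transform(stacked)               # shared 4-D subspace
    spheres, start = [], 0
    for feats in features_by_action:
        c = coords[start:start + len(feats)]            # this action's points
        center = c.mean(axis=0)                         # assumed sphere center
        radius = np.linalg.norm(c - center, axis=1).max()  # enclose all views
        spheres.append((center, radius))
        start += len(feats)
    return embed, spheres
```

At recognition time the query's polar feature would be projected with `embed.transform` and compared against each (center, radius) pair, as in the classification sketch under Embodiment 1.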

Description

Technical field

[0001] The invention relates to video surveillance, and in particular to a template matching-based, viewpoint-independent human action recognition method.

Background technique

[0002] Some researchers equate human action recognition with three-dimensional reconstruction of the human body: they believe that if the three-dimensional human pose corresponding to the video can be recovered, the goal of recognition is naturally achieved. In this approach, a pre-established 3D human body model is fitted to the 2D human contour in the image in order to reconstruct the human pose. However, recovering 3D information from arbitrary image sequences is a very complex and non-linear process that is easily disturbed by noise and is not robust. Therefore, more researchers are studying how to recognize human actions directly from two-dimensional video. Ekinci et al. proposed a real-time human motion tracking and pose estimation method for video surveillance. They use...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/62; G06K9/66; G06K9/00; G06T17/00; H04N7/18
Inventor: 庄越挺 (ZHUANG Yueting), 肖俊 (XIAO Jun), 张剑 (ZHANG Jian), 吴飞 (WU Fei)
Owner: ZHEJIANG UNIV