
Training method of multi-moving object action identification and multi-moving object action identification method

This technology concerns multi-moving-target action recognition. It is applied to training and recognition methods for multi-moving-target action behavior recognition, and addresses problems such as the neglect of group behavior, the lack of effective methods for classifying group action behavior, and the weak expressive power of existing action-behavior-pattern models, achieving good recognition results.

Inactive Publication Date: 2010-10-20
INST OF COMPUTING TECH CHINESE ACAD OF SCI
Cites: 0 · Cited by: 30
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

The first limitation is that existing methods mainly analyze the actions of a small number of targets, such as classifying and recognizing the actions of a single person or recognizing the interactive behavior of two people; they lack effective methods for classifying the group actions of three or more people.
This limitation makes existing methods ineffective at classifying and recognizing group action behaviors.
For example, existing public-square surveillance systems can currently recognize individual behaviors but still cannot handle multi-person behaviors such as group fights.
The second limitation is that existing methods do not fully account for the inherent uncertainty of action behavior when modeling movement, so their expressive power for action-behavior patterns is weak; they cannot describe and classify behavior patterns, such as multi-person behaviors, that exhibit large intra-class variation.
However, such methods ignore information at the individual level, and relying only on the relationship constraints of spatial structure cannot adapt to the characteristics of multi-person behaviors.
Reference 2 ("Learning Group Activity in Soccer Videos from Local Motion", Yu Kong, Weiming Hu, Xiaoqin Zhang, Hanzi Wang, and Yunde Jia, Lecture Notes in Computer Science, Asian Conference on Computer Vision (ACCV), 2009) proposes a method for group behavior recognition using local features, but this method relies only on local appearance features and cannot describe multi-person behavior patterns at a higher semantic level.

Method used


Image

  • Training method of multi-moving object action identification and multi-moving object action identification method

Examples


Embodiment Construction

[0039] The present invention is described below with reference to the accompanying drawings and specific embodiments.

[0040] In typical videos, people are the main moving objects. Therefore, in describing the training and recognition methods for multi-moving-object action behavior of the present invention, people are used as the example. Since the method of the present invention recognizes the actions of multiple people in a video, the video to be processed should generally contain multiple people.

[0041] Referring to figure 1, in step S1, the motion trajectory of each person is extracted from video data containing the actions of multiple people. Extracting a person's movement trajectory from video is common knowledge to those skilled in the art; related prior-art methods are used, such as detecting and then tracking the moving objects in the video, so as to obtain the movement traject...
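The patent leaves trajectory extraction to prior-art detection-and-tracking methods. As one illustrative sketch (not the patent's method), per-frame detection centroids can be greedily linked into per-person trajectories by nearest-neighbour matching between consecutive frames; the function name and distance threshold below are assumptions for illustration.

```python
import numpy as np

def link_detections(frames, max_dist=50.0):
    """Greedily link per-frame detection centroids into per-person
    trajectories by nearest-neighbour matching between frames.
    `frames` is a list of (N_i, 2) arrays of (x, y) centroids."""
    # Start one track per detection in the first frame.
    tracks = [[c] for c in frames[0]]
    for dets in frames[1:]:
        unused = list(range(len(dets)))
        for tr in tracks:
            if not unused:
                break
            last = tr[-1]
            # Distance from the track's last position to each unmatched detection.
            dists = [float(np.linalg.norm(dets[j] - last)) for j in unused]
            k = int(np.argmin(dists))
            if dists[k] <= max_dist:
                tr.append(dets[unused.pop(k)])
    return [np.array(tr) for tr in tracks]
```

A real system would use a proper detector and a more robust tracker (e.g. with motion prediction and track birth/death), but the output in all cases is one (T, 2) trajectory per person, which is what the later steps consume.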



Abstract

The invention provides a training method for multi-moving-object action identification, comprising the following steps: extracting the movement trajectory of each moving object from video data; layering the movement trajectory information of the moving objects; modeling the movement pattern of the multi-moving-object action at each layer; characterizing the movement-pattern model by combining the global and local movement information in the video, where the features include at least a three-dimensional hyperparameter vector that describes a movement trajectory using a Gaussian process; and training a classifier on these features. The invention also provides a multi-moving-object action identification method, which identifies multi-moving-object actions in video using the classifier obtained by the training method. In the invention, the movement trajectory of an object is represented probabilistically by a Gaussian process, and the multi-person action pattern is modeled and features are extracted at three granularity layers, making the representation of multi-person actions more practical.
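The abstract's key feature is a three-dimensional hyperparameter vector obtained by fitting a Gaussian process to a movement trajectory. A minimal sketch of that idea follows, assuming an RBF kernel whose three hyperparameters are length-scale, signal variance, and noise variance, chosen by a coarse grid search over the log marginal likelihood; the patent does not specify the kernel or the optimizer, so these choices and the function names are illustrative assumptions.

```python
import numpy as np

def log_marginal_likelihood(t, y, ell, sf2, sn2):
    """GP log marginal likelihood of trajectory values y at times t,
    under an RBF kernel with length-scale ell, signal variance sf2,
    and observation noise variance sn2."""
    d = t[:, None] - t[None, :]
    K = sf2 * np.exp(-0.5 * (d / ell) ** 2) + sn2 * np.eye(len(t))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(t) * np.log(2.0 * np.pi))

def gp_trajectory_feature(t, y):
    """Return the 3-D hyperparameter vector (ell, sf2, sn2) that best
    explains one trajectory coordinate, found by a coarse grid search."""
    best, best_lml = None, -np.inf
    for ell in (0.25, 0.5, 1.0, 2.0, 4.0):
        for sf2 in (0.5, 1.0, 2.0, 4.0):
            for sn2 in (0.01, 0.05, 0.1, 0.5):
                lml = log_marginal_likelihood(t, y, ell, sf2, sn2)
                if lml > best_lml:
                    best, best_lml = (ell, sf2, sn2), lml
    return np.array(best)

# Usage: one noisy 1-D trajectory coordinate sampled over time.
t = np.linspace(0.0, 6.0, 40)
y = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(40)
feature = gp_trajectory_feature(t, y)  # 3-D vector, one per coordinate
```

Because the hyperparameters summarize how smooth, how large, and how noisy a trajectory is, the resulting fixed-length vector can be fed to any standard classifier, which matches the abstract's "train a classifier on these features" step.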

Description

Technical field

[0001] The invention relates to the field of content-based video analysis and action recognition, in particular to a training method and a recognition method for multi-moving-target action behavior recognition.

Background technique

[0002] With the development and application of information technology, more and more digital content, especially video data, is continuously produced. These video data contain rich semantic information, and how to effectively explore and utilize this information is a frontier research direction in the field.

[0003] Video content usually consists of a large number of objects and their motion behaviors, and analyzing and understanding these motion behaviors is an important part of video content analysis. With the widespread deployment of video surveillance systems, the demand for behavior analysis and recognition in video is increasing, and the requirements for the difficulty and accuracy of the analyzed and identified content ar...

Claims


Application Information

Patent Timeline: no application
Patent Type & Authority: Application (China)
IPC(8): G06K9/66, G06T7/20
Inventor: 黄庆明, 成仲炜, 秦磊, 蒋树强
Owner: INST OF COMPUTING TECH CHINESE ACAD OF SCI