
Multi-visual-angle action recognition method

A multi-view action recognition technology in the field of computer vision. It addresses problems of the prior art, such as poor recognition of actions with subtle differences, sensitivity to changes in motion duration and to noise, and failure to obtain good results on more complex motions, achieving high accuracy, effective recognition, and an increased degree of discrimination between actions.

Active Publication Date: 2015-01-07
BEIJING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

[0012] Because the existing technology recognizes actions with subtle differences poorly, is sensitive to changes in motion duration and to noise, cannot adapt to observation sequences with internal dependencies, relies only on one-dimensional linear-chain conditional random fields, and cannot obtain good results for more complex motions, the present invention proposes a multi-view action recognition method.



Examples


Embodiment Construction

[0040] The present invention will be explained in detail below in conjunction with the accompanying drawings.

[0041] A multi-view action recognition method includes two processes of action training and action recognition.

[0042] As shown in Figure 1, the action training process includes the following steps:

[0043] X1: Manually label the training video files, covering a total of 4 viewing angles and 10 action classes;

[0044] X2: Extract spatio-temporal interest points from the training video files; the present invention adopts methods such as Gaussian filtering and Gabor filtering;
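As one illustration of step X2, the sketch below follows the widely used Gaussian/Gabor interest-point detector of Dollár et al.; the patent names only the two filter families, so the sigma, tau, window size, and threshold values here are assumptions, not the patent's own parameters.

```python
# Sketch of step X2: a spatio-temporal interest-point detector in the spirit
# of the Dollar et al. "cuboid" detector (spatial Gaussian smoothing plus a
# temporal Gabor quadrature pair). All numeric parameters are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, convolve1d, maximum_filter

def stip_response(video, sigma=2.0, tau=1.5):
    """video: (T, H, W) grayscale array -> response map of the same shape."""
    # Spatial Gaussian smoothing, applied frame by frame.
    smoothed = gaussian_filter(video.astype(np.float64), sigma=(0, sigma, sigma))
    # Temporal 1-D Gabor quadrature pair (even/odd), with omega = 4 / tau.
    t = np.arange(-2 * int(np.ceil(tau)), 2 * int(np.ceil(tau)) + 1)
    omega = 4.0 / tau
    h_ev = -np.cos(2 * np.pi * t * omega) * np.exp(-t**2 / tau**2)
    h_od = -np.sin(2 * np.pi * t * omega) * np.exp(-t**2 / tau**2)
    # Convolve along the time axis and sum the squared quadrature responses.
    r_ev = convolve1d(smoothed, h_ev, axis=0)
    r_od = convolve1d(smoothed, h_od, axis=0)
    return r_ev**2 + r_od**2

def detect_stips(video, threshold=1e4):
    """Return (t, y, x) coordinates of local response maxima above threshold."""
    resp = stip_response(video)
    peaks = (resp == maximum_filter(resp, size=5)) & (resp > threshold)
    return np.argwhere(peaks)
```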

[0045] X3: Compute a feature descriptor for the region around each spatio-temporal interest point; in the present invention, the descriptor comprises a histogram of oriented gradients (HOG) and a histogram of optical flow (HOF);
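A minimal sketch of step X3, assuming an 8-bin HOG and HOF computed on a small patch around each interest point (the patent gives neither patch sizes nor bin counts), with OpenCV's Farnebäck dense optical flow as a stand-in flow estimator:

```python
# Sketch of step X3: HOG + HOF descriptor for the patch around one
# spatio-temporal interest point. Patch size and bin count are assumptions;
# the point is assumed to lie at least `half` pixels inside the frame border.
import numpy as np
import cv2

def hog_hof_descriptor(video, t, y, x, half=8, n_bins=8):
    """video: (T, H, W) uint8 array; returns a concatenated HOG+HOF vector."""
    patch = video[t, y-half:y+half, x-half:x+half].astype(np.float32)
    # --- HOG: orientation histogram of spatial gradients, magnitude-weighted ---
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy)
    hog, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    # --- HOF: orientation histogram of dense optical flow vs. previous frame ---
    prev = video[max(t - 1, 0), y-half:y+half, x-half:x+half]
    curr = video[t, y-half:y+half, x-half:x+half]
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    fmag, fang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hof, _ = np.histogram(fang, bins=n_bins, range=(0, 2 * np.pi), weights=fmag)
    # L1-normalise and concatenate into one descriptor vector.
    desc = np.concatenate([hog, hof])
    return desc / (desc.sum() + 1e-9)
```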

[0046] X4: Randomly sample the set of feature descriptors from step X3 to obtain a subset (see the sketch after step X5);

[0047] X5: Reduce the dimensionality of all feature descriptors…
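The text is cut off here, but the Abstract confirms that dimensionality reduction is followed by K-means clustering. The sketch below covers steps X4 and X5 plus that clustering stage under stated assumptions: PCA stands in for the unnamed reduction method, and all sizes are illustrative.

```python
# Sketch of steps X4/X5 and the clustering stage described in the Abstract:
# subsample the descriptor set, reduce its dimensionality (PCA is an
# assumption), then build a K-means vocabulary so every interest point maps
# to a "visual word" index. All sizes below are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
descriptors = rng.random((50_000, 16))   # stand-in for all HOG+HOF vectors

# X4: random subset, used to fit the reduction cheaply.
subset = descriptors[rng.choice(len(descriptors), 10_000, replace=False)]

# X5: dimensionality reduction fitted on the subset, applied to everything.
pca = PCA(n_components=8).fit(subset)
reduced = pca.transform(descriptors)

# K-means vocabulary (per the Abstract); each point gets a word index.
vocab = KMeans(n_clusters=100, n_init=10, random_state=0).fit(reduced)
labels = vocab.predict(reduced)
```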



Abstract

The invention discloses a multi-visual-angle (multi-view) action recognition method. The method includes two processes: action training and action recognition. In the action training process, a two-dimensional conditional random field method is used to train a classifier. The action recognition process includes the following steps: spatio-temporal interest points are extracted; feature descriptors are calculated; dimensionality reduction is performed on the feature descriptors; the feature descriptors are clustered to obtain a preprocessing file; and the preprocessing file is fed into the classifier obtained in the training process. The spatio-temporal relations among the interest points are fully utilized, and the features of different actions are effectively described; K-means clustering is adopted to cluster different actions into different categories, which increases the degree of discrimination in action recognition; and by introducing the two-dimensional conditional random field, the temporal action sequence of a single camera and the spatial action sequence among multiple cameras are effectively modeled, so that the trained model is more accurate and human body actions are effectively recognized.
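A conceptual sketch of that two-dimensional structure, assuming nothing beyond what the abstract states: nodes form a (time × camera-view) grid, temporal edges link consecutive frames within one camera, and spatial edges link different cameras at the same instant. The potentials below are illustrative stand-ins, not the patent's own model.

```python
# Conceptual sketch of the two-dimensional CRF described in the Abstract:
# a (T time steps x V camera views) grid of label nodes, with temporal
# edges along each camera's sequence and spatial edges across cameras.
import numpy as np

def grid_crf_score(labels, unary, w_time, w_view):
    """labels: (T, V) ints; unary: (T, V, K) per-node scores;
    w_time / w_view: (K, K) pairwise compatibility matrices."""
    T, V = labels.shape
    score = sum(unary[t, v, labels[t, v]] for t in range(T) for v in range(V))
    for t in range(T):
        for v in range(V):
            if t + 1 < T:   # temporal edge within one camera
                score += w_time[labels[t, v], labels[t + 1, v]]
            if v + 1 < V:   # spatial edge across cameras at the same instant
                score += w_view[labels[t, v], labels[t, v + 1]]
    return score

# Example with the patent's counts: 4 views, 10 action classes, 20 frames.
rng = np.random.default_rng(0)
unary = rng.normal(size=(20, 4, 10))
w_time = rng.normal(size=(10, 10))
w_view = rng.normal(size=(10, 10))
labels = rng.integers(0, 10, size=(20, 4))
print(grid_crf_score(labels, unary, w_time, w_view))
```

Training such a model would fit the unary and pairwise weights so that the labeled grids score highest; because the grid graph contains loops, exact inference is generally replaced by approximate methods such as loopy belief propagation.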

Description

Technical Field

[0001] The invention belongs to the technical field of computer vision, and in particular relates to a multi-view action recognition method.

Background

[0002] Computer vision technology, which uses video cameras and computers to "see" in place of human eyes, is receiving more and more attention. By capturing images with cameras and applying preset algorithms in a computer, this technology can recognize images and videos and process them further; it attempts to build an artificial intelligence system that obtains and processes information from images or videos.

[0003] Moreover, with the increasing maturity of video surveillance technology and the spread of monitoring equipment, the cost of cameras and other monitoring devices is falling day by day, video information is becoming easier and more convenient to acquire, and its quality is steadily improving. Based on...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/64; G06K9/46; G06T7/00
Inventors: 马华东 (Ma Huadong), 傅慧源 (Fu Huiyuan), 张征 (Zhang Zheng)
Owner: BEIJING UNIV OF POSTS & TELECOMM