
A multi-view action recognition method

A multi-view action recognition method in the field of computer vision. It addresses the problems of existing technology: poor recognition of actions with subtle differences, sensitivity to changes in motion duration and to noise, and poor results on more complex motions. The method achieves high accuracy, increases the discrimination between actions, and realizes effective recognition.

Active Publication Date: 2018-02-06
BEIJING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

[0012] Existing technology recognizes actions with only subtle differences poorly, is sensitive to changes in motion duration and to noise, adapts poorly to observation sequences with dependencies, and, because it uses only a one-dimensional linear-chain conditional random field, cannot obtain good results for more complex motions. The present invention therefore proposes a multi-view action recognition method.




Detailed Description of the Embodiments

[0040] The present invention is explained in detail below with reference to the accompanying drawings.

[0041] A multi-view action recognition method includes two processes: action training and action recognition.

[0042] As shown in Figure 1, the action training process includes the following steps:

[0043] X1: Manually annotate the training video files, which cover a total of 4 viewing angles and 10 action types;

[0044] X2: Extract spatio-temporal interest points from the training video files; for this, the present invention adopts methods such as Gaussian filtering and Gabor filtering (see the first sketch after this list of steps);

[0045] X3: Calculate a feature descriptor for the region around each spatio-temporal interest point; in the present invention the feature descriptor includes a histogram of oriented gradients and a histogram of optical flow (see the second sketch below);

[0046] X4: Randomly sample the set of feature descriptors from step X3 to obtain a subset (steps X4 and X5 are illustrated in the third sketch below);

[0047] X5: Reduce the dimensionality of all feature descriptors...
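
Since the patent names only the filter types for step X2, the following is a minimal sketch in the spirit of a periodic spatio-temporal interest point detector: Gaussian smoothing in space, a temporal Gabor quadrature pair, and local maxima of the response energy. Every parameter value here (sigma, tau, the kernel support, the threshold) is an illustrative assumption, not a value from the patent.

```python
import numpy as np
from scipy.ndimage import convolve1d, gaussian_filter, maximum_filter

def stip_response(video, sigma=2.0, tau=1.5):
    """video: float array of shape (T, H, W) holding grayscale frames."""
    # Spatial Gaussian smoothing only; sigma controls the spatial scale.
    smoothed = gaussian_filter(video, sigma=(0, sigma, sigma))
    # Temporal 1-D Gabor quadrature pair; tau controls the temporal scale.
    t = np.arange(-10, 11, dtype=float)
    envelope = np.exp(-t**2 / (2 * tau**2))
    omega = 0.6 / tau
    g_even = np.cos(2 * np.pi * omega * t) * envelope
    g_odd = np.sin(2 * np.pi * omega * t) * envelope
    even = convolve1d(smoothed, g_even, axis=0)
    odd = convolve1d(smoothed, g_odd, axis=0)
    return even**2 + odd**2                # energy of the quadrature pair

def detect_interest_points(video, threshold=1e-4):
    r = stip_response(video)
    # Keep points that are local maxima of the response and above threshold.
    peaks = (r == maximum_filter(r, size=5)) & (r > threshold)
    return np.argwhere(peaks)              # each row is (t, y, x)
```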
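
For step X3, a hedged sketch of the two descriptor parts: a magnitude-weighted orientation histogram of spatial gradients (the HOG part) and of optical flow (the HOF part), computed over a small cuboid around one interest point. The cuboid extent, the bin count, and the assumption that per-frame flow has already been computed (for example with OpenCV's calcOpticalFlowFarneback) are illustrative choices, not values from the patent.

```python
import numpy as np

def orientation_histogram(dx, dy, n_bins=8):
    """Magnitude-weighted histogram of gradient (or flow) orientations."""
    mag = np.hypot(dx, dy)
    ang = np.arctan2(dy, dx) % (2 * np.pi)
    bins = (ang / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-9)            # L1-normalise

def hog_hof_descriptor(video, flow, t, y, x, half=8):
    """video: (T, H, W) grayscale; flow: (T, H, W, 2) precomputed per frame.
    Assumes the point lies far enough from the video borders."""
    cube = video[t - 2:t + 3, y - half:y + half, x - half:x + half]
    gy, gx = np.gradient(cube.mean(axis=0))      # spatial gradients -> HOG
    hog = orientation_histogram(gx, gy)
    f = flow[t - 2:t + 3, y - half:y + half, x - half:x + half]
    hof = orientation_histogram(f[..., 0], f[..., 1])  # flow (u, v) -> HOF
    return np.concatenate([hog, hof])            # 16-D descriptor here
```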
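
Steps X4 and X5 are truncated in this excerpt, but together with the K-means clustering named in the abstract they suggest a standard bag-of-visual-words construction: sample a subset, fit PCA on it, project all descriptors, and cluster into a codebook. The subset size, target dimensionality, and vocabulary size below are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def sample_subset(descriptors, n=100_000):
    """X4: draw a random subset of the pooled (N, D) descriptor array."""
    idx = rng.choice(len(descriptors), size=min(n, len(descriptors)),
                     replace=False)
    return descriptors[idx]

def build_codebook(descriptors, dim=64, vocab=500):
    """X5 and onward: PCA dimensionality reduction, then a K-means codebook."""
    subset = sample_subset(descriptors)
    pca = PCA(n_components=dim).fit(subset)          # fit on the subset only
    kmeans = KMeans(n_clusters=vocab, n_init=10).fit(pca.transform(subset))
    # Assign every descriptor in the full set to its nearest visual word.
    words = kmeans.predict(pca.transform(descriptors))
    return pca, kmeans, words
```

At recognition time the same PCA projection and codebook would presumably be reused to quantize a new video's descriptors into the "preprocessed file" the abstract mentions.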



Abstract

The invention discloses a multi-view action recognition method comprising two processes: action training and action recognition. During action training, a classifier is trained with the two-dimensional conditional random field method. The action recognition process includes the following steps: extracting spatio-temporal interest points; calculating feature descriptors; reducing the dimensionality of the feature descriptors; clustering the feature descriptors to obtain a preprocessed file; and feeding the preprocessed file to the classifier obtained during training. The invention makes full use of the spatio-temporal relationships between spatio-temporal interest points and effectively describes the characteristics of different actions; it adopts K-means clustering to group different actions into different categories, increasing the discrimination of action recognition; and by introducing the two-dimensional conditional random field it can effectively model the temporal action sequence under a single camera and the spatial action sequence between multiple cameras, making the training model more accurate and thereby realizing effective recognition of human actions.
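
The abstract's structural idea, temporal edges within one camera plus spatial edges across cameras at the same instant, amounts to a conditional random field over a (view × time) grid. The sketch below only scores one candidate labelling of that grid; every name and shape is an assumption, and parameter learning (e.g. maximizing conditional log-likelihood) and inference over the loopy grid are omitted.

```python
import numpy as np

def crf2d_score(obs, labels, W_unary, T_time, T_view):
    """Unnormalised log-score of one labelling of the (view x time) grid.

    obs:     (V, T, D) feature vector per camera view v and frame t
    labels:  (V, T)    candidate action label (0..K-1) at each grid node
    W_unary: (K, D)    per-class weights for the unary potentials
    T_time:  (K, K)    weights on temporal edges (t, t+1) within one view
    T_view:  (K, K)    weights on spatial edges (v, v+1) at the same frame
    """
    V, T, _ = obs.shape
    score = 0.0
    for v in range(V):
        for t in range(T):
            score += W_unary[labels[v, t]] @ obs[v, t]           # unary term
            if t + 1 < T:                                        # temporal edge
                score += T_time[labels[v, t], labels[v, t + 1]]
            if v + 1 < V:                                        # inter-view edge
                score += T_view[labels[v, t], labels[v + 1, t]]
    return score  # P(labels | obs) is proportional to exp(score)
```

The temporal term alone would reduce this to the one-dimensional linear-chain CRF the background criticizes; the inter-view term is what makes the model two-dimensional.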

Description

Technical Field

[0001] The invention belongs to the technical field of computer vision, and in particular relates to a multi-view action recognition method.

Background Technique

[0002] Computer vision technology, which uses cameras and computers to "see" in place of human eyes, is beginning to receive more and more attention. By capturing footage with cameras and applying preset algorithms in a computer, this technology can recognize images and videos and process them further; it attempts to establish artificial intelligence systems that obtain and process information from images or videos.

[0003] Moreover, as video surveillance technology matures and monitoring equipment becomes widespread, the cost of equipment such as cameras is decreasing day by day, video information is becoming easier and more convenient to acquire, and its quality is getting higher and higher. Based on...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/64; G06K9/46; G06T7/00
Inventors: 马华东, 傅慧源, 张征
Owner: BEIJING UNIV OF POSTS & TELECOMM