
A Human Behavior Recognition Method Based on the Fusion of Multi-feature Spatial-Temporal Relationships

A spatio-temporal relationship and recognition technology, applied in the field of computer vision, which addresses problems such as the loss of spatio-temporal relationship information, time-consuming processing, and high computational complexity, and achieves the effect of improved recognition accuracy.

Inactive Publication Date: 2018-08-28
SOUTHEAST UNIV
Cites: 3 · Cited by: 0

AI Technical Summary

Problems solved by technology

[0004] At present, one of the most widely used and refined encoding methods is the Bag of Words (BoW) method. The classic BoW method first clusters the features and then expresses each video as a frequency histogram vector counting how often features fall into each centroid. Although BoW coding has shown good generalization ability and robustness in much of the literature, the method also has several shortcomings: the feature clustering process is time-consuming, the parameter k of the KMEANS algorithm must be chosen by hand, and the spatio-temporal relationship information between centroids is lost. Moreover, when multi-level centroids must be constructed, the computational complexity of the method becomes relatively high.
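As a point of reference, here is a minimal sketch of the classic BoW encoding described above, assuming descriptors have already been extracted per video; the names (bow_encode, descriptors_per_video, k) are illustrative, not taken from the patent. It makes the listed shortcomings concrete: k is fixed by hand, the clustering step dominates runtime, and the histogram keeps no record of where or when features occurred.

```python
# Minimal sketch of classic Bag-of-Words video encoding (illustrative, not the
# patent's method). The k-bin histogram discards all spatio-temporal structure.
import numpy as np
from sklearn.cluster import KMeans

def bow_encode(descriptors_per_video, k=4000):
    """Encode each video as a k-bin frequency histogram over visual words."""
    all_desc = np.vstack(descriptors_per_video)             # pool descriptors from all videos
    kmeans = KMeans(n_clusters=k, n_init=1).fit(all_desc)   # time-consuming for large k
    histograms = []
    for desc in descriptors_per_video:
        words = kmeans.predict(desc)                        # nearest centroid per descriptor
        hist = np.bincount(words, minlength=k).astype(float)
        histograms.append(hist / max(hist.sum(), 1.0))      # L1-normalized histogram
    return np.array(histograms), kmeans
```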




Embodiment Construction

[0018] The preferred embodiments of the present invention are described in detail below, so that the advantages and features of the invention can be more easily understood by those skilled in the art and the protection scope of the invention can be defined more clearly.

[0019] Embodiments of the present invention provide a human behavior recognition method based on multi-feature spatio-temporal relationship fusion; the specific steps are as follows:

[0020] Step 1: Perform dense trajectory feature extraction on the video. First, sample feature points on a dense grid; so that the collected feature points adapt to scale changes, sampling is carried out simultaneously on multiple grids with different spatial scales. The dense trajectory feature then tracks each sampled point by estimating the optical flow field of each frame, with each sampling point tracked for only L frames within its corresponding spatial scale, and finally...
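The following is a simplified sketch of this step, assuming BGR input frames and OpenCV's Farneback optical flow as the flow estimator; the published dense-trajectory approach adds a median-filtered flow field, multiple spatial scales, and trajectory pruning, all omitted here, and the names dense_trajectories, step, and traj_len are illustrative.

```python
# Simplified dense-trajectory tracking sketch: sample a dense grid once, then
# follow each point through the optical flow field for at most L frames.
import cv2
import numpy as np

def dense_trajectories(frames, step=5, traj_len=15):
    """Track densely sampled grid points for traj_len (L) frames via optical flow."""
    h, w = frames[0].shape[:2]
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]  # dense grid sampling
    tracks = [[(float(x), float(y))] for x, y in zip(xs.ravel(), ys.ravel())]
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:traj_len + 1]:                   # track for at most L frames
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        for tr in tracks:
            x, y = tr[-1]
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < w and 0 <= yi < h:
                dx, dy = flow[yi, xi]                      # displacement at the point
                tr.append((x + dx, y + dy))
        prev = gray
    return tracks
```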



Abstract

The invention discloses a human behavior recognition method based on the fusion of multi-feature spatio-temporal relationships. The specific steps include: representing the dense trajectory features extracted from a video with the optical flow histogram (HOF) and the motion boundary histogram (MBH); using the KMEANS algorithm to construct the centroids of the two features; computing the spatio-temporal distances between the features in each video and thereby building a spatio-temporal bipartite graph between the two sets of centroids; segmenting the spatio-temporal bipartite graph with K-way bipartite graph partitioning; obtaining a video-level code that fuses the two features through a representation based on conditional probability; and finally training a classifier and performing recognition. By fusing centroids with strong spatio-temporal relationships, the method better mines the effective information of the different features and improves recognition accuracy.
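As a hedged illustration of the fusion idea, the sketch below builds a bipartite affinity matrix between HOF centroids and MBH centroids, weighted by the spatio-temporal proximity of the features assigned to them, and partitions it K ways. SpectralCoclustering is used as a stand-in for the patent's K-way bipartite graph segmentation, whose exact algorithm, like the conditional-probability encoding, is not specified in this excerpt; the Gaussian weighting and all names are assumptions.

```python
# Hedged sketch: bipartite affinity between HOF centroids (rows) and MBH
# centroids (columns), then a K-way bipartite partition via co-clustering.
import numpy as np
from sklearn.cluster import SpectralCoclustering

def fuse_centroids(hof_words, mbh_words, positions, k_hof, k_mbh, K=10, sigma=1.0):
    """hof_words / mbh_words: centroid index per feature; positions: (x, y, t) rows."""
    W = np.zeros((k_hof, k_mbh))
    pos = np.asarray(positions, dtype=float)
    for wh, ph in zip(hof_words, pos):                 # accumulate pairwise affinity
        d = np.linalg.norm(pos - ph, axis=1)           # spatio-temporal distances
        weights = np.exp(-d ** 2 / (2 * sigma ** 2))   # assumed Gaussian weighting
        for wm, wt in zip(mbh_words, weights):
            W[wh, wm] += wt
    model = SpectralCoclustering(n_clusters=K, random_state=0).fit(W + 1e-9)
    return model.row_labels_, model.column_labels_     # fused group per centroid
```

Centroids landing in the same bicluster are those with a strong spatio-temporal relationship across the two feature channels, which is the property the patent exploits for fusion.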

Description

Technical field

[0001] The invention relates to the field of computer vision, and in particular to a human behavior recognition method based on multi-feature spatio-temporal relationship fusion.

Background technique

[0002] With the development of computer science, video has become part of people's lives. Enabling computers to "understand" human behavior in video plays an important role in fields such as content-based video retrieval, intelligent surveillance, human-computer interaction, and virtual reality.

[0003] Generally speaking, a classic human action recognition framework comprises three main steps: feature extraction, video encoding, and classifier training and recognition. When multiple features are used, it also includes an optional multi-feature pre-fusion or post-fusion step. Among these, video encoding is the key step in determining recognition accuracy.

[0004] At present, one of the most widely used and refined encoding methods is the Bag of Words...
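To round out the pipeline in [0003], here is a minimal sketch of the final classifier training and recognition stage, assuming video-level encodings have already been computed; LinearSVC and the variable names are illustrative choices, not mandated by the patent.

```python
# Minimal sketch of the last stage of the classic pipeline: train a classifier
# on video-level codes, then recognize unseen videos (illustrative choice of SVM).
from sklearn.svm import LinearSVC

def train_and_recognize(train_codes, train_labels, test_codes):
    clf = LinearSVC()                       # linear SVM over video-level codes
    clf.fit(train_codes, train_labels)      # classifier training
    return clf.predict(test_codes)          # recognition
```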


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00, G06K9/62
CPC: G06V20/42, G06V20/49, G06F18/23213, G06F18/2411
Inventor: 姚莉
Owner: SOUTHEAST UNIV