
A human action recognition method in video based on the location information of interest points

A point-of-interest and human-action-recognition technology, applied in the field of computer vision, which solves problems such as complex computation and excessive memory requirements in existing methods while achieving high recognition accuracy

Publication Date: 2019-01-29 (Inactive)
SOUTH CHINA UNIV OF TECH

AI Technical Summary

Problems solved by technology

This method effectively solves the problems of complex computation and excessive memory requirements in current human action recognition methods, while at the same time achieving high recognition accuracy.



Examples


Embodiment

[0072] As shown in Figure 1. First, for each video sequence in the video data set, the interest points of the human actions in the sequence are extracted. Next, the position information of these interest points is used to intelligently segment the sequence, dividing the video into several video segments. Then, for each video segment, its interest-point position-distribution (HoP) descriptor is computed, and the HoP descriptor is used to represent the human action of that segment. The videos can then be trained and tested with methods such as support vector machines or nearest-neighbor classifiers. Each test video is likewise intelligently segmented to obtain the human action category of each of its segments, and finally the most frequent action is taken as the human action represented by the test video.
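The final majority-vote step of this embodiment can be illustrated with a short sketch. This is a minimal illustration, not the patent's reference implementation; the per-segment labels are assumed to come from a classifier such as the support vector machine mentioned above.

    from collections import Counter

    def video_label_from_segments(segment_labels):
        # Assign a video-level action label by majority vote over its segments.
        # segment_labels: per-segment predictions, e.g. ["wave", "wave", "run"].
        # Ties are broken by first occurrence, an arbitrary but deterministic choice.
        return Counter(segment_labels).most_common(1)[0][0]

    # Three segments voted "wave", one voted "run", so the video is labeled "wave".
    print(video_label_from_segments(["wave", "run", "wave", "wave"]))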

[0073] The method specifically includes the following steps:

[0074] S1 For each video sequence in the video data set, extract points of interest...
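The excerpt truncates here, so the exact form of the HoP descriptor is not given. A plausible reading of "interest point position distribution", offered purely as an assumed sketch, is a spatial histogram of interest-point coordinates normalized to their bounding box:

    import numpy as np

    def hop_descriptor(points, grid=(8, 8)):
        # Assumed sketch of a position-distribution (HoP) descriptor for one segment:
        # normalize (x, y) interest-point positions to their bounding box, count them
        # into a grid, and L1-normalize. points: (N, 2) array of (x, y) positions.
        pts = np.asarray(points, dtype=float)
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        span = np.where(hi > lo, hi - lo, 1.0)      # guard against zero extent
        norm = (pts - lo) / span                    # positions mapped into [0, 1]^2
        hist, _, _ = np.histogram2d(norm[:, 1], norm[:, 0],
                                    bins=grid, range=[[0, 1], [0, 1]])
        return (hist / max(hist.sum(), 1.0)).ravel()

The grid size and bounding-box normalization here are assumptions for illustration; the patent's full description would fix these details.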


Abstract

The invention discloses a method for recognizing human actions in video based on the position information of interest points, comprising the following steps: S1, for each video sequence in a video data set, extract the interest points of the human actions in the video sequence; S2, use the interest points of the human actions to intelligently segment the video sequence, dividing it into several video segments; S3, for each video segment, compute the position-distribution HoP descriptor of the human-action interest points, and use the HoP descriptor to represent the human action of the segment; S4, use the HoP descriptor representing each video segment for human-action training; S5, finally, take the most frequent human action as the human action represented by the video. The invention proposes a method for computing the HoP descriptor from the position information of interest points, which effectively preserves the differences between different actions.
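As a rough end-to-end illustration of steps S4 and S5, the per-segment HoP descriptors could be fed to an off-the-shelf classifier. The sketch below uses scikit-learn's SVC (support vector machines are one of the classifier families named in the description) with synthetic descriptors standing in for real ones.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: 64-dimensional HoP descriptors of training segments (S4).
    X_train = rng.random((40, 64))
    y_train = rng.integers(0, 3, size=40)        # three hypothetical action classes

    clf = SVC(kernel="linear").fit(X_train, y_train)

    # For a test video, classify each of its segments, then take the most
    # frequent predicted action as the video-level label (S5).
    X_test_segments = rng.random((5, 64))
    seg_preds = clf.predict(X_test_segments)
    video_label = np.bincount(seg_preds).argmax()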

Description

Technical Field

[0001] The invention belongs to the field of computer vision, and in particular relates to a method for recognizing human actions in video based on the position information of interest points.

Background

[0002] With the development of computer technology and multimedia technology, video has become a main carrier of information. In recent years, the growing popularity of digital products and the rapid development of the Internet have made it easier to create and share videos. At the same time, the spread of video surveillance, the popularity of the Microsoft Kinect motion-sensing game console, and the continuing development of human-computer interaction technology have produced a wide variety of videos. By combining video streams with computer processing so that computers can understand video information as humans do, computer vision is playing an increasingly important role.

[0003] Human action recognition is an attractive and challengi...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00; G06K9/32
CPC: G06V40/23; G06V40/20; G06V10/25
Inventor: 张见威, 朱林
Owner: SOUTH CHINA UNIV OF TECH