
Multi-feature fusion behavior identification method based on key frame

A multi-feature fusion behavior recognition method based on key frames of human motion sequences. It addresses the problems that a single video feature cannot accurately express the required video information, that redundant data causes important target information to be missed, and that frame-by-frame detection of massive video data is difficult to implement; the method reduces the influence of subtle inter-class differences and improves recognition accuracy.

Active Publication Date: 2019-08-06
NORTHWEST UNIV(CN)
Cites: 5 · Cited by: 58

AI Technical Summary

Problems solved by technology

However, as video complexity continues to grow, a single video feature can no longer accurately express the required video information.
Moreover, as the amount of video data and information keeps increasing, redundant data causes important target information to be missed during behavior recognition, and frame-by-frame inspection of such huge amounts of data runs contrary to the principles of video analysis and is difficult to implement.



Detailed Description of the Embodiments

[0046] The technical solution of the present invention will be described in detail below in conjunction with the embodiments and the accompanying drawings, but is not limited thereto.

[0047] The present invention is developed on the Ubuntu 16.04 system, equipped with a GeForce GPU.

[0048] OpenCV 3.1.0, Python, and the other tools required for the experiment were configured, and the openpose pose extraction library was built locally.

[0049] The key-frame-based multi-feature behavior recognition method of the present invention, as shown in figure 1, includes the following steps:

[0050] Step 1. Input the video into the openpose pose extraction library to extract the joint-point information of each human body in the video; each human body contains the 2D coordinate information of 18 joint points. The human skeleton representation and joint indices are shown in figure 2, and the joint-point coordinates and position sequence of each frame are def...
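Step 1 can be sketched as follows: flatten each frame's 18 (x, y) joint coordinates into one feature vector and stack them into the per-frame sequence. This is a minimal illustration; the function names and the exact joint-index layout are assumptions, not the patent's code.

```python
import numpy as np

def frame_feature(joints):
    # Flatten the 18 (x, y) joint coordinates of one frame into a single
    # feature vector x(i). `joints` is an 18x2 array in an openpose-style
    # index order (the exact layout is an assumption here).
    joints = np.asarray(joints, dtype=float)
    assert joints.shape == (18, 2)
    return joints.reshape(-1)  # shape (36,)

def build_sequence(video_joints):
    # Stack the per-frame feature vectors into the sequence
    # S = {x(1), x(2), ..., x(N)} as an N x 36 array.
    return np.stack([frame_feature(j) for j in video_joints])
```

For an N-frame video this yields an N x 36 array, one row per frame, which is the input to the subsequent key-frame clustering step.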



Abstract

A multi-feature fusion behavior identification method based on key frames comprises the following steps. First, the joint-point feature vector x(i) of each human body in the video is extracted through the openpose human pose extraction library to form a sequence S = {x(1), x(2), ..., x(N)}. Second, the K-means algorithm is used to obtain K final clustering centers c' = {c'_i | i = 1, 2, ..., K}; the frame closest to each clustering center is extracted as a key frame of the video, yielding the key frame sequence F = {F_i | i = 1, 2, ..., K}. Then the RGB information, optical-flow information, and skeleton information of the key frames are obtained and processed: the RGB and optical-flow information are input into a two-stream convolutional network model to obtain their higher-level feature expression, and the skeleton information is input into a spatio-temporal graph convolutional network model to construct the spatio-temporal graph features of the skeleton. Finally, the softmax outputs of the networks are fused to obtain the final identification result. This process avoids the time consumption and reduced accuracy caused by redundant frames and makes better use of the information in the video to express behaviors, so that recognition accuracy is further improved.

Description

technical field

[0001] The invention belongs to the technical fields of computer graphics and human-computer interaction, and in particular relates to a multi-feature fusion behavior recognition method based on key frames of human motion sequences.

Background technique

[0002] Vision is the most important medium for information transmission in human activities; studies have found that about 80% of information is obtained through vision. In recent years, with the development of computer technology and especially the rapid popularization of the Internet, computer vision has become one of the most active and popular subjects in the computer field. Computer Vision refers to machine vision that uses cameras and computers to simulate human vision in order to identify, track, and measure targets, with further image processing through recognition and analysis. Human action recognition, as an emerging research field in computer vision, has been extensively...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62
CPC: G06V40/20, G06V20/41, G06V20/46, G06F18/23213
Inventors: 高岭, 何丹, 赵悦蓉, 周俊鹏, 郑勇, 张侃, 郭红波, 王海
Owner NORTHWEST UNIV(CN)