
Target-assisted action recognition method based on graph neural network

An action recognition and neural network technology, applied in the field of target-assisted action recognition, which can solve the problem of low accuracy in video action recognition and achieve the effect of improving recognition accuracy.

Active Publication Date: 2019-12-03
XI AN JIAOTONG UNIV

AI Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to provide a target-assisted action recognition method based on a graph neural network to solve the above-mentioned technical problem of low video action recognition accuracy.


Examples


Embodiment

[0054] See Figure 1. Figure 1 illustrates the public dataset Object-Charades used to verify the feasibility of the method of the present invention. This dataset is a large-scale multi-label video dataset; the actions in it include person-object interactions, and the ground-truth information of each video includes the video's action labels and, for every frame, the bounding boxes of the people and interacting objects, obtained with a pretrained object detector. The dataset contains 52 types of actions and more than 7,000 videos; the average length of each video is about 30 seconds, and the scenes where the actions take place are all indoors. As shown in Figure 1, each image represents a video and contains the bounding boxes of people and interacting objects, and below each image is the action label of the video.
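For concreteness, the sketch below shows one plausible way to organize the per-video annotations just described (multi-label action classes plus per-frame person and object boxes). The field names are illustrative assumptions, not the dataset's actual schema.

```python
# Illustrative annotation record for a multi-label video dataset with per-frame
# person/object bounding boxes; field names are assumptions, not the actual
# Object-Charades schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BoundingBox:
    x1: float
    y1: float
    x2: float
    y2: float
    category: str                           # e.g. "person" or a detected object class

@dataclass
class FrameAnnotation:
    frame_index: int
    boxes: List[BoundingBox] = field(default_factory=list)

@dataclass
class VideoAnnotation:
    video_id: str
    action_labels: List[int] = field(default_factory=list)   # several of the 52 classes
    frames: List[FrameAnnotation] = field(default_factory=list)
```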

[0055] See Figure 2. A target-assisted action recognition method based on a graph neural network in an embodiment of the present invention specifically com...
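The embodiment text is truncated above; as a rough illustration of the first two steps summarized in the Abstract (per-frame deep features, then per-bounding-box target features), the sketch below uses a standard CNN backbone and RoIAlign. The backbone choice, pooling size, and spatial scale are assumptions, not the patent's specification.

```python
# Rough sketch (assumed components, not the patent's exact pipeline): compute
# deep features for each frame with a CNN backbone, then pool one feature
# vector per target bounding box with RoIAlign.
import torch
import torchvision
from torchvision.ops import roi_align

# ResNet-50 without its avgpool/fc layers -> (T, 2048, H/32, W/32) feature maps
backbone = torch.nn.Sequential(
    *list(torchvision.models.resnet50(weights=None).children())[:-2]
)

def target_features(frames, boxes_per_frame):
    """frames: (T, 3, H, W) video frames.
    boxes_per_frame: list of T tensors, each (K_t, 4) holding [x1, y1, x2, y2]
    boxes in pixel coordinates."""
    feat_maps = backbone(frames)                              # per-frame deep features
    pooled = roi_align(feat_maps, boxes_per_frame,
                       output_size=(7, 7), spatial_scale=1.0 / 32)
    return pooled.mean(dim=(2, 3))                            # one 2048-d vector per box
```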


Abstract

The invention discloses a target-assisted action recognition method based on a graph neural network. The method comprises the following steps: calculating the deep features of each frame of a video by using a deep neural network; extracting, from the deep features of each frame, the features of the target corresponding to each target bounding box in the video frame, where targets either have an interaction relationship or correspond to each other across time, and constructing a graph model from the target features and the relationships between the targets; constructing two mapping functions to automatically calculate the similarity between any two nodes, using this similarity to control the information interaction during the iterative update of node feature information on the graph model, and iteratively updating the features of the nodes on the graph model; and finally performing action classification with the updated target features and the features of the original video, so as to realize action recognition for the video and improve the accuracy of action recognition.
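The similarity-controlled, iterative node update described in the abstract resembles an attention-style message-passing step. Below is a minimal sketch, assuming the two mapping functions are learned linear projections whose inner product gives the pairwise similarity; it is an illustration of the idea, not the patent's exact formulation.

```python
# Minimal sketch of similarity-gated, iterative node-feature updates on a
# target graph (assumed formulation, not the patent's exact equations).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityGraphUpdate(nn.Module):
    def __init__(self, feat_dim, hidden_dim=256, num_iterations=2):
        super().__init__()
        self.phi = nn.Linear(feat_dim, hidden_dim)    # first mapping function (assumed linear)
        self.psi = nn.Linear(feat_dim, hidden_dim)    # second mapping function (assumed linear)
        self.update = nn.Linear(feat_dim, feat_dim)   # transforms the aggregated messages
        self.num_iterations = num_iterations

    def forward(self, node_feats, adjacency):
        """node_feats: (N, feat_dim), one node per detected target.
        adjacency: (N, N), 1 where two targets interact in a frame or
        correspond to the same target across time, 0 otherwise."""
        adj = adjacency.clone()
        adj.fill_diagonal_(1)                          # keep self-loops so every row is valid
        x = node_feats
        for _ in range(self.num_iterations):
            sim = self.phi(x) @ self.psi(x).t()        # similarity between any two nodes
            sim = sim.masked_fill(adj == 0, float('-inf'))
            weights = F.softmax(sim, dim=-1)           # similarity controls information flow
            messages = weights @ x                     # aggregate neighbour features
            x = x + F.relu(self.update(messages))      # residual iterative update
        return x
```

Under this reading, the adjacency mask encodes which targets interact within a frame or correspond to the same target across time, and the final classifier would combine the pooled updated node features with the original video features for multi-label action classification.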

Description

Technical field

[0001] The invention belongs to the technical field of computer vision and pattern recognition, and in particular relates to a target-assisted action recognition method based on a graph neural network.

Background technique

[0002] As a key step in video processing, video action recognition has a great impact on video analysis and processing, and has important research value both in theory and in practical applications. At present, existing video action recognition technology generally has the following problems: (1) Most video action recognition methods use a deep neural network to extract features of the video and then classify those video features; this approach does not consider the relationships between the objects and frames in the video, which leads to poor classification robustness. (2) Some methods densely sample video frames and use the temporal correlation between frames to construct a temporal graph to assist action localization; this temporal graph mode...


Application Information

IPC(8): G06K9/00, G06K9/62
CPC: G06V40/10, G06V40/20, G06F18/241
Inventor: 王乐, 翟长波, 谭浩亮
Owner: XI AN JIAOTONG UNIV