
First-visual-angle interactive action recognition method based on global and local network fusion

An action recognition technique based on global and local networks, applied in character and pattern recognition, computer components, and image analysis; it addresses the problem that existing methods cannot obtain high-precision recognition results.

Status: Inactive · Publication Date: 2018-08-17
NANJING UNIV OF SCI & TECH
Cites: 3 · Cited by: 13

AI Technical Summary

Problems solved by technology

This makes traditional single-classifier action recognition methods unable to achieve high-precision recognition, so the action features need to be analyzed finely by combining global and local methods to obtain an efficient representation.



Examples


Embodiment Construction

[0018] With reference to Figure 1, a method for first-person human-computer interaction video action recognition based on global and local network fusion comprises the following steps:

[0019] Step 1: Sample the video to obtain different actions, taking 16 frames of images to form an action sample;
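The clip sampling of step 1 can be sketched in a few lines; only the 16-frame length comes from the patent text, while the OpenCV reader and the even spacing of the sampled frames are assumptions made here for illustration.

```python
# Illustrative sketch of step 1 (assumes OpenCV; only the 16-frame length is from the patent).
import cv2
import numpy as np

def sample_action_clip(video_path, num_frames=16):
    """Return `num_frames` evenly spaced frames from the video as one action sample."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, total - 1, num_frames).astype(int)
    frames = []
    for i in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return np.stack(frames) if frames else None  # shape: (16, H, W, 3)
```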

[0020] Step 2: Unify the size of the sampled action segments and perform data augmentation; train a 3D convolutional network that takes the global images as input and learn the global spatio-temporal features of the action to obtain a network classification model;
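The patent specifies a 3D convolutional network for this step but not its architecture; the PyTorch sketch below is one plausible minimal stand-in, with the layer sizes and the 112x112 input resolution chosen here as assumptions.

```python
# Minimal 3D-CNN sketch for the global branch (architecture and input size are assumptions).
import torch
import torch.nn as nn

class Global3DNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),                      # pool space only, keep 16 frames
            nn.Conv3d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                              # pool space and time
            nn.Conv3d(128, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, x):                                 # x: (batch, 3, frames, H, W)
        return self.classifier(self.features(x).flatten(1))

# Example: two 16-frame clips resized to 112x112 after data augmentation.
clips = torch.randn(2, 3, 16, 112, 112)
logits = Global3DNet(num_classes=10)(clips)               # (2, 10) class scores
```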

[0021] Step 3: Use sparse optical flow to locate the local region of the action segment in which the salient motion occurs;
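Step 3 names sparse optical flow only; the corner detector, displacement threshold, and bounding-box construction in this OpenCV sketch are assumptions used to illustrate one way the salient-motion region could be located.

```python
# Sketch of step 3: Lucas-Kanade sparse optical flow to box the most salient motion.
import cv2
import numpy as np

def salient_motion_box(prev_frame, next_frame, motion_thresh=2.0):
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    # Track a sparse set of corner points from one frame to the next.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
    if pts is None:
        return None
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    ok = status.ravel() == 1
    disp = np.linalg.norm(new_pts[ok] - pts[ok], axis=-1).ravel()
    moving = new_pts[ok][disp > motion_thresh].reshape(-1, 2)
    if len(moving) == 0:
        return None
    # Bounding box around the points that actually moved: the "local region".
    return cv2.boundingRect(moving.astype(np.float32))    # (x, y, w, h)
```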

[0022] Step 4: After uniformly processing the local regions of the different actions, adjust the network hyperparameters and train a 3D convolutional network that takes the local images as input, learning the locally salient action features to obtain a second network classification model;
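The patent does not disclose which hyperparameters are adjusted for the local branch; the learning rate, crop size, and tiny stand-in model in the following PyTorch sketch are all assumptions.

```python
# Sketch of step 4: train a second 3D-CNN on uniformly resized local crops (all values assumed).
import torch
import torch.nn as nn

local_net = nn.Sequential(                       # tiny stand-in for the local 3D-CNN
    nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(32, 10),
)
optimizer = torch.optim.SGD(local_net.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

local_clips = torch.randn(2, 3, 16, 64, 64)      # local regions cropped and resized uniformly
labels = torch.tensor([0, 1])
loss = criterion(local_net(local_clips), labels)
loss.backward()
optimizer.step()
```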

[0023] Step 5: Fuse the global and local models: obtain multiple action samples by sampling the same video several times, count and rank the predictions given by the global and local models using a voting method, and take the class with the most votes as the recognized action label.
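The fusion in step 5 (as described in the abstract) amounts to majority voting over the predictions from repeated samplings of the same video; a minimal sketch, assuming the per-sampling class predictions of both branches are already available:

```python
# Sketch of step 5: majority vote over global and local predictions from repeated samplings.
from collections import Counter

def fuse_by_voting(global_preds, local_preds):
    """Return the class predicted most often across both branches and all samplings."""
    counts = Counter(global_preds) + Counter(local_preds)
    return counts.most_common(1)[0][0]

# Example: four samplings of the same video; class 2 wins the vote.
print(fuse_by_voting(global_preds=[2, 2, 5, 2], local_preds=[2, 5, 5, 2]))  # -> 2
```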



Abstract

The invention provides a first-person-view interactive action recognition method based on global and local network fusion. The method comprises: sampling a video to obtain different actions and forming action samples from the extracted images; unifying the size of the sampled action segments and performing data augmentation, training a 3D convolutional network that takes the global image as input, and learning the global spatio-temporal features of the action to obtain a network classification model; locating the local region of the action segment in which salient motion occurs by using sparse optical flow; after size unification of the local regions of the different actions, adjusting the network hyperparameters, training a 3D convolutional network that takes the local image as input, and learning the locally salient action features to obtain a second network classification model; and obtaining action samples by sampling the same video multiple times, counting and ranking the predictions provided by the global and local models with a voting method, and taking the class with the largest number of votes as the recognized action label.

Description

Technical Field

[0001] The present invention relates to interactive action recognition and image processing technology, and in particular to a first-person-view interactive action recognition method based on global and local network fusion.

Background

[0002] In recent years, with the development of portable devices and the popularization of head-mounted cameras, more and more first-person videos have been produced, which brings the need to analyze human behavior from the first-person perspective. First-person video offers a new viewpoint for capturing social and object interactions, but the long-duration actions and unstructured scenes introduced by the always-on head-mounted camera make action parsing in first-person video challenging. Interactive actions in the first-person view include two types: the ego-motion of the observer and the actions of the interacting partner. The interaction often affects the observer, so there ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K 9/00; G06T 7/269
CPC: G06T 7/269; G06V 40/20; G06V 20/46
Inventors: 宋砚, 法羚玲, 唐金辉, 舒祥波
Owner: NANJING UNIV OF SCI & TECH