
Deep joint structured and structured learning method for human action recognition

A joint structure and structured learning technology, applied in character and pattern recognition, instruments, computer components, etc., addressing problems such as the inability to recognize interactive behaviors and inapplicability to images containing multiple behavior categories.

Active Publication Date: 2017-06-09
ZHEJIANG UNIV OF TECH

Problems solved by technology

[0003] In order to overcome the shortcomings of existing human behavior recognition methods, which cannot be applied to images containing multiple behavior categories and cannot identify interactive behaviors, the present invention provides a deep joint structured and structured learning method for human action recognition that is applicable to images with multiple behavior categories and capable of recognizing interactive behaviors.




Detailed Description of the Embodiments

[0047] The present invention will be further described below.

[0048] A deep joint structured and structured learning method for human action recognition comprises the following steps:

[0049] 1) Construct the joint structure and structured formulation

[0050] Suppose there is a set of n training samples {(I, a, E)}, where I represents an image and a is the collection of action labels of all people in the image. If the image contains m individuals, then a = [a_1, ..., a_m]. The matrix E = (e_ij) ∈ {0, 1}^(m×m) is a strictly upper triangular matrix representing the interrelationship structure among all individuals in the image. Specifically, e_ij = 0 means there is no interaction between person i and person j, while e_ij = 1 indicates that person i and person j interact with each other. In fact, a and E can be considered direct descriptions of the human activity. With this representation, the recognition system is able to answer not only the question 1) what they are doing, but also the ...
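The (a, E) representation above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the helper `make_interaction_matrix`, the action labels, and the example pairing are all hypothetical, chosen only to show how a strictly upper triangular binary matrix encodes pairwise interactions.

```python
import numpy as np

def make_interaction_matrix(m, interacting_pairs):
    """Build the strictly upper triangular matrix E in {0,1}^(m x m).

    e[i, j] = 1 means person i and person j interact; only entries
    with i < j are set, so E stays strictly upper triangular.
    (Hypothetical helper for illustration.)
    """
    E = np.zeros((m, m), dtype=int)
    for i, j in interacting_pairs:
        lo, hi = min(i, j), max(i, j)
        E[lo, hi] = 1
    return E

# Example image with m = 3 people; persons 0 and 1 shake hands
# (labels are illustrative, not from the patent).
a = ["handshake", "handshake", "walking"]  # per-person action labels
E = make_interaction_matrix(3, [(0, 1)])

# Together, (a, E) describe the activity: who does what, and who
# interacts with whom. Recover the interacting pairs from E:
pairs = [(i, j) for i in range(3) for j in range(i + 1, 3) if E[i, j]]
```

Because only the i < j half of E is used, each unordered pair of people is represented exactly once.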



Abstract

A deep joint structured and structured learning method for human action recognition comprises the following steps: 1) constructing the joint structure and structured formulation; 2) using a spatial network to extract deep convolutional neural network (CNN) features from the human body regions in an image, taking the fc6-layer output of the spatial network as the deep feature, further enhancing the feature representation with histogram-of-oriented-gradients (HOG) and histogram-of-optical-flow (HOF) features, and concatenating the CNN, HOG, and HOF features to represent individual actions or interaction relations in the image; these features are used to train two linear support vector machine (SVM) classifiers for each dataset, and the combined features are computed according to formula (1); 3) training the model parameters; 4) performing training and the associated inference in prediction, solving the loss-augmented inference for each training example in every iteration of the training process. The method is applicable to images of various behavior categories and can recognize interaction behaviors.

Description

Technical Field

[0001] The invention belongs to the field of behavior recognition in computer vision, and relates to a human behavior recognition method.

Background Technique

[0002] Recognizing human actions in images or videos is a fundamental problem in computer vision, and it is crucial in many applications such as sports video analysis, surveillance systems, and video retrieval. In recent work, deep learning has significantly improved the performance of action recognition. However, these works are not suitable for handling data containing multi-person interactions. First, they focus on assigning each image a single action label, which is not suitable for images containing multiple action categories. Second, they ignore the fact that the interrelationships between people provide important contextual information for recognizing complex human activities such as handshakes, fights, and football games.

Contents of the Invention

[0003] In order to overcome the shortcomings of the ex...


Application Information

IPC(8): G06K9/62; G06K9/46; G06K9/00
CPC: G06V40/20; G06V10/50; G06F18/2411; G06F18/214
Inventors: 王振华, 金佳丽, 陈胜勇, 刘盛, 张剑华
Owner: ZHEJIANG UNIV OF TECH