
A deeply supervised convolutional neural network behavior recognition method based on trainable feature fusion

A convolutional neural network and feature fusion technique, applied in the field of artificial intelligence and computer vision, which addresses the problem of losing local spatial information in video.

Active Publication Date: 2019-03-08
BEIJING INSTITUTE OF TECHNOLOGY

AI Technical Summary

Problems solved by technology

However, this method loses some of the video's local spatial information.



Examples


Embodiment Construction

[0087] The specific implementation method of the present invention will be further described in detail below in conjunction with the accompanying drawings.

[0088] The invention is carried out on a computer implementing the following three main functions: 1. Multi-layer convolutional feature extraction, which extracts multi-layer feature maps from each frame of the video. 2. Feature aggregation, comprising a local evolution pooling layer, which encodes the multi-frame feature maps obtained at each layer into local evolution descriptors, and a VLAD encoding layer, which encodes the local evolution descriptors into a meta-action-based video-level representation. 3. Deeply supervised action recognition, which uses the multi-layer video-level representations obtained above to recognize the behavior in the video.
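
The patent gives no source code, but the pooling step can be made concrete. Below is a minimal PyTorch sketch of a local evolution pooling layer, under the assumption that it acts like a simple linear-in-time weighted sum (in the spirit of approximate rank pooling) applied independently at every spatial position; the class name, tensor shapes, and weighting scheme are illustrative assumptions rather than the patented design.

```python
import torch
import torch.nn as nn

class LocalEvolutionPooling(nn.Module):
    """Collapse T per-frame feature maps (B, T, C, H, W) into one
    descriptor map (B, C, H, W) that preserves temporal ordering."""

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = feats.shape
        # Illustrative (assumed) linear temporal weights alpha_t = 2t - T - 1:
        # later frames get larger weights, so the weighted sum captures how
        # each local feature evolves over time, not just its average value.
        steps = torch.arange(1, t + 1, dtype=feats.dtype, device=feats.device)
        alpha = (2 * steps - t - 1).view(1, t, 1, 1, 1)
        return (alpha * feats).sum(dim=1)

# Toy usage: pool conv features from 8 frames into evolution descriptors.
frames = torch.randn(2, 8, 256, 14, 14)   # (batch, time, channels, H, W)
descriptors = LocalEvolutionPooling()(frames)
print(descriptors.shape)                   # torch.Size([2, 256, 14, 14])
```

Because the weights here are a fixed linear ramp, the layer is differentiable and adds no parameters; a learned variant would simply make `alpha` trainable.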



Abstract

The invention provides a deeply supervised convolutional neural network behavior recognition method based on trainable feature fusion, belonging to the field of artificial intelligence and computer vision. The method extracts multi-layer convolutional features from the target video and designs a local evolution pooling layer that maps the video's convolutional features to a vector containing temporal information, thereby extracting local evolution descriptors of the target video. Using VLAD coding, multiple local evolution descriptors are encoded into a meta-action-based video-level representation. Exploiting the complementary information among the levels of the convolutional network, the final classification result is obtained by fusing the predictions from multiple levels. The invention makes full use of temporal information to construct the video-level representation, effectively improving the accuracy of video behavior recognition; at the same time, fusing the multi-level predictions improves the discriminability of the network's middle layers and thereby the performance of the whole network.
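
As a concrete illustration of the encoding step, here is a hedged sketch of how the VLAD layer could be realized; it follows the common trainable NetVLAD-style formulation (soft assignment via a 1x1 convolution), with the K learned cluster centers playing the role of the meta-actions. The class and parameter names and all dimensions are assumptions for illustration, not the patent's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaActionVLAD(nn.Module):
    """Encode a map of local descriptors (B, C, H, W) into a fixed-length
    video-level VLAD vector (B, K*C) over K learned 'meta-action' centers."""

    def __init__(self, dim: int, num_clusters: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_clusters, dim))
        self.assign = nn.Conv2d(dim, num_clusters, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Soft-assign every spatial location to the K centers.
        soft = F.softmax(self.assign(x), dim=1).view(b, -1, h * w)  # (B, K, N)
        x = x.view(b, c, -1)                                        # (B, C, N)
        # Residual of each local descriptor from each center, weighted
        # by its soft assignment and summed over all N locations.
        resid = x.unsqueeze(1) - self.centers.view(1, -1, c, 1)     # (B, K, C, N)
        vlad = (soft.unsqueeze(2) * resid).sum(dim=-1)              # (B, K, C)
        vlad = F.normalize(vlad, dim=2)      # intra-normalization per center
        return F.normalize(vlad.flatten(1), dim=1)

# Toy usage: encode the evolution descriptors from the earlier sketch.
vlad = MetaActionVLAD(dim=256, num_clusters=16)
video_repr = vlad(torch.randn(2, 256, 14, 14))  # -> shape (2, 16 * 256)
```

Under the deeply supervised scheme the abstract describes, a representation like this would be built at several convolutional levels, each feeding its own classifier, and the per-level class scores would then be fused (for example, averaged) into the final prediction.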

Description

Technical field [0001] The invention relates to a video-based behavior recognition method, in particular to a deeply supervised convolutional neural network behavior recognition method based on trainable feature fusion, and belongs to the field of artificial intelligence and computer vision. Background [0002] Human action recognition is currently a research hotspot in intelligent video analysis and an important direction for video understanding tasks. In recent years it has gained widespread attention in video surveillance, abnormal event detection, and content-based video retrieval. However, owing to the complexity and variability of human behavior and the interference of video background information, among other factors, building an appropriate spatio-temporal representation of video is the key challenge. [0003] Early research mainly focused on recognizing simple actions in ideal scenes, using behavior recognition methods based on artifici...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/20, G06V20/41, G06N3/045, G06F18/253
Inventors: 李侃 (Li Kan), 李杨 (Li Yang), 王欣欣 (Wang Xinxin)
Owner: BEIJING INSTITUTE OF TECHNOLOGY